00:00:00.001 Started by upstream project "autotest-per-patch" build number 132523
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.022 The recommended git tool is: git
00:00:00.022 using credential 00000000-0000-0000-0000-000000000002
00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.039 Fetching changes from the remote Git repository
00:00:00.043 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.062 Using shallow fetch with depth 1
00:00:00.062 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.062 > git --version # timeout=10
00:00:00.091 > git --version # 'git version 2.39.2'
00:00:00.091 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.117 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.117 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.249 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.262 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.275 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.275 > git config core.sparsecheckout # timeout=10
00:00:02.285 > git read-tree -mu HEAD # timeout=10
00:00:02.299 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.317 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.318 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.502 [Pipeline] Start of Pipeline
00:00:02.517 [Pipeline] library
00:00:02.519 Loading library shm_lib@master
00:00:05.695 Library shm_lib@master is cached. Copying from home.
00:00:05.747 [Pipeline] node
00:00:05.835 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest
00:00:05.837 [Pipeline] {
00:00:05.847 [Pipeline] catchError
00:00:05.851 [Pipeline] {
00:00:05.865 [Pipeline] wrap
00:00:05.873 [Pipeline] {
00:00:05.881 [Pipeline] stage
00:00:05.883 [Pipeline] { (Prologue)
00:00:05.903 [Pipeline] echo
00:00:05.905 Node: VM-host-SM17
00:00:05.911 [Pipeline] cleanWs
00:00:05.919 [WS-CLEANUP] Deleting project workspace...
00:00:05.919 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.926 [WS-CLEANUP] done
00:00:06.112 [Pipeline] setCustomBuildProperty
00:00:06.222 [Pipeline] httpRequest
00:00:08.894 [Pipeline] echo
00:00:08.895 Sorcerer 10.211.164.20 is alive
00:00:08.906 [Pipeline] retry
00:00:08.908 [Pipeline] {
00:00:08.922 [Pipeline] httpRequest
00:00:08.926 HttpMethod: GET
00:00:08.927 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.928 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.928 Response Code: HTTP/1.1 200 OK
00:00:08.929 Success: Status code 200 is in the accepted range: 200,404
00:00:08.930 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.717 [Pipeline] }
00:00:09.731 [Pipeline] // retry
00:00:09.738 [Pipeline] sh
00:00:10.020 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.036 [Pipeline] httpRequest
00:00:10.377 [Pipeline] echo
00:00:10.379 Sorcerer 10.211.164.20 is alive
00:00:10.389 [Pipeline] retry
00:00:10.391 [Pipeline] {
00:00:10.406 [Pipeline] httpRequest
00:00:10.410 HttpMethod: GET
00:00:10.411 URL: http://10.211.164.20/packages/spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz
00:00:10.412 Sending request to url: http://10.211.164.20/packages/spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz
00:00:10.413 Response Code: HTTP/1.1 200 OK
00:00:10.414 Success: Status code 200 is in the accepted range: 200,404
00:00:10.415 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz
00:00:21.575 [Pipeline] }
00:00:21.593 [Pipeline] // retry
00:00:21.600 [Pipeline] sh
00:00:21.881 + tar --no-same-owner -xf spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz
00:00:24.426 [Pipeline] sh
00:00:24.705 + git -C spdk log --oneline -n5
00:00:24.705 a9e1e4309 nvmf: discovery log page updation change
00:00:24.705 2a91567e4 CHANGELOG.md: corrected typo
00:00:24.705 6c35d974e lib/nvme: destruct controllers that failed init asynchronously
00:00:24.705 414f91a0c lib/nvmf: Fix double free of connect request
00:00:24.705 d8f6e798d nvme: Fix discovery loop when target has no entry
00:00:24.725 [Pipeline] writeFile
00:00:24.740 [Pipeline] sh
00:00:25.021 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:25.032 [Pipeline] sh
00:00:25.311 + cat autorun-spdk.conf
00:00:25.311 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.311 SPDK_RUN_ASAN=1
00:00:25.311 SPDK_RUN_UBSAN=1
00:00:25.311 SPDK_TEST_RAID=1
00:00:25.311 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.318 RUN_NIGHTLY=0
00:00:25.320 [Pipeline] }
00:00:25.332 [Pipeline] // stage
00:00:25.347 [Pipeline] stage
00:00:25.349 [Pipeline] { (Run VM)
00:00:25.361 [Pipeline] sh
00:00:25.638 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:25.638 + echo 'Start stage prepare_nvme.sh'
00:00:25.638 Start stage prepare_nvme.sh
00:00:25.638 + [[ -n 2 ]]
00:00:25.638 + disk_prefix=ex2
00:00:25.638 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:25.639 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:25.639 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:25.639 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.639 ++ SPDK_RUN_ASAN=1
00:00:25.639 ++ SPDK_RUN_UBSAN=1
00:00:25.639 ++ SPDK_TEST_RAID=1
00:00:25.639 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.639 ++ RUN_NIGHTLY=0
00:00:25.639 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:25.639 + nvme_files=()
00:00:25.639 + declare -A nvme_files
00:00:25.639 + backend_dir=/var/lib/libvirt/images/backends
00:00:25.639 + nvme_files['nvme.img']=5G
00:00:25.639 + nvme_files['nvme-cmb.img']=5G
00:00:25.639 + nvme_files['nvme-multi0.img']=4G
00:00:25.639 + nvme_files['nvme-multi1.img']=4G
00:00:25.639 + nvme_files['nvme-multi2.img']=4G
00:00:25.639 + nvme_files['nvme-openstack.img']=8G
00:00:25.639 + nvme_files['nvme-zns.img']=5G
00:00:25.639 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:25.639 + (( SPDK_TEST_FTL == 1 ))
00:00:25.639 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:25.639 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:25.639 + for nvme in "${!nvme_files[@]}"
00:00:25.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:25.639 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.639 + for nvme in "${!nvme_files[@]}"
00:00:25.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:25.639 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.639 + for nvme in "${!nvme_files[@]}"
00:00:25.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:25.639 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:25.639 + for nvme in "${!nvme_files[@]}"
00:00:25.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:25.639 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.639 + for nvme in "${!nvme_files[@]}"
00:00:25.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:25.639 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.639 + for nvme in "${!nvme_files[@]}"
00:00:25.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:25.639 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.639 + for nvme in "${!nvme_files[@]}"
00:00:25.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:25.897 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.897 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:25.897 + echo 'End stage prepare_nvme.sh'
00:00:25.897 End stage prepare_nvme.sh
00:00:25.908 [Pipeline] sh
00:00:26.188 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:26.188 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:00:26.188
00:00:26.188 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:26.188 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:26.188 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:26.188 HELP=0
00:00:26.188 DRY_RUN=0
00:00:26.188 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:00:26.188 NVME_DISKS_TYPE=nvme,nvme,
00:00:26.188 NVME_AUTO_CREATE=0
00:00:26.188 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:00:26.188 NVME_CMB=,,
00:00:26.188 NVME_PMR=,,
00:00:26.188 NVME_ZNS=,,
00:00:26.188 NVME_MS=,,
00:00:26.188 NVME_FDP=,,
00:00:26.188 SPDK_VAGRANT_DISTRO=fedora39
00:00:26.188 SPDK_VAGRANT_VMCPU=10
00:00:26.188 SPDK_VAGRANT_VMRAM=12288
00:00:26.188 SPDK_VAGRANT_PROVIDER=libvirt
00:00:26.188 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:26.188 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:26.188 SPDK_OPENSTACK_NETWORK=0
00:00:26.188 VAGRANT_PACKAGE_BOX=0
00:00:26.188 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:26.188 FORCE_DISTRO=true
00:00:26.188 VAGRANT_BOX_VERSION=
00:00:26.188 EXTRA_VAGRANTFILES=
00:00:26.188 NIC_MODEL=e1000
00:00:26.188
00:00:26.188 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:26.188 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:28.722 Bringing machine 'default' up with 'libvirt' provider...
00:00:29.314 ==> default: Creating image (snapshot of base box volume).
00:00:29.314 ==> default: Creating domain with the following settings...
00:00:29.314 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732626797_bc17e12018d1c40a76ac
00:00:29.314 ==> default: -- Domain type: kvm
00:00:29.314 ==> default: -- Cpus: 10
00:00:29.314 ==> default: -- Feature: acpi
00:00:29.314 ==> default: -- Feature: apic
00:00:29.314 ==> default: -- Feature: pae
00:00:29.314 ==> default: -- Memory: 12288M
00:00:29.314 ==> default: -- Memory Backing: hugepages:
00:00:29.314 ==> default: -- Management MAC:
00:00:29.314 ==> default: -- Loader:
00:00:29.314 ==> default: -- Nvram:
00:00:29.314 ==> default: -- Base box: spdk/fedora39
00:00:29.314 ==> default: -- Storage pool: default
00:00:29.314 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732626797_bc17e12018d1c40a76ac.img (20G)
00:00:29.314 ==> default: -- Volume Cache: default
00:00:29.314 ==> default: -- Kernel:
00:00:29.314 ==> default: -- Initrd:
00:00:29.314 ==> default: -- Graphics Type: vnc
00:00:29.314 ==> default: -- Graphics Port: -1
00:00:29.314 ==> default: -- Graphics IP: 127.0.0.1
00:00:29.314 ==> default: -- Graphics Password: Not defined
00:00:29.314 ==> default: -- Video Type: cirrus
00:00:29.314 ==> default: -- Video VRAM: 9216
00:00:29.314 ==> default: -- Sound Type:
00:00:29.314 ==> default: -- Keymap: en-us
00:00:29.314 ==> default: -- TPM Path:
00:00:29.314 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:29.314 ==> default: -- Command line args:
00:00:29.314 ==> default: -> value=-device,
00:00:29.314 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:29.314 ==> default: -> value=-drive,
00:00:29.314 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:29.314 ==> default: -> value=-device,
00:00:29.314 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.314 ==> default: -> value=-device,
00:00:29.314 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:29.314 ==> default: -> value=-drive,
00:00:29.314 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:29.314 ==> default: -> value=-device,
00:00:29.314 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.314 ==> default: -> value=-drive,
00:00:29.314 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:29.314 ==> default: -> value=-device,
00:00:29.314 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.314 ==> default: -> value=-drive,
00:00:29.314 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:29.314 ==> default: -> value=-device,
00:00:29.314 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.314 ==> default: Creating shared folders metadata...
00:00:29.314 ==> default: Starting domain.
00:00:31.219 ==> default: Waiting for domain to get an IP address...
00:00:46.100 ==> default: Waiting for SSH to become available...
00:00:47.038 ==> default: Configuring and enabling network interfaces...
00:00:51.297 default: SSH address: 192.168.121.139:22
00:00:51.297 default: SSH username: vagrant
00:00:51.297 default: SSH auth method: private key
00:00:53.832 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:01.949 ==> default: Mounting SSHFS shared folder...
00:01:02.885 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:02.885 ==> default: Checking Mount..
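The per-namespace -drive/-device pairs in the domain dump above follow a regular pattern: one backing image per namespace, with the nsid counting up on a single controller. The sketch below rebuilds the nvme-1 argument list the same way; it is an illustrative reconstruction (not the actual Vagrantfile logic), with the paths and IDs copied from this log.

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of how the nvme-1 controller args above are
# composed: 2 controller arguments, then a -drive/-device pair (4 more
# arguments) per backing image, with nsid counting up from 1.
backend=/var/lib/libvirt/images/backends
args=(-device "nvme,id=nvme-1,serial=12341,addr=0x11")
nsid=1
for img in ex2-nvme-multi0.img ex2-nvme-multi1.img ex2-nvme-multi2.img; do
  args+=(-drive "format=raw,file=$backend/$img,if=none,id=nvme-1-drive$((nsid - 1))")
  args+=(-device "nvme-ns,drive=nvme-1-drive$((nsid - 1)),bus=nvme-1,nsid=$nsid,zoned=false,logical_block_size=4096,physical_block_size=4096")
  nsid=$((nsid + 1))
done
printf '%s\n' "${args[@]}"   # 14 arguments total: 2 + 3 images x 4
```

The same loop with a single image and nsid fixed at 1 yields the nvme-0 controller's arguments.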
00:01:04.262 ==> default: Folder Successfully Mounted!
00:01:04.262 ==> default: Running provisioner: file...
00:01:04.829 default: ~/.gitconfig => .gitconfig
00:01:05.087
00:01:05.087 SUCCESS!
00:01:05.087
00:01:05.087 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:05.087 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:05.087 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:05.087
00:01:05.096 [Pipeline] }
00:01:05.110 [Pipeline] // stage
00:01:05.119 [Pipeline] dir
00:01:05.120 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:05.121 [Pipeline] {
00:01:05.134 [Pipeline] catchError
00:01:05.136 [Pipeline] {
00:01:05.149 [Pipeline] sh
00:01:05.428 + vagrant ssh-config --host vagrant
00:01:05.428 + sed -ne /^Host/,$p
00:01:05.428 + tee ssh_conf
00:01:08.713 Host vagrant
00:01:08.713 HostName 192.168.121.139
00:01:08.713 User vagrant
00:01:08.713 Port 22
00:01:08.713 UserKnownHostsFile /dev/null
00:01:08.713 StrictHostKeyChecking no
00:01:08.713 PasswordAuthentication no
00:01:08.714 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:08.714 IdentitiesOnly yes
00:01:08.714 LogLevel FATAL
00:01:08.714 ForwardAgent yes
00:01:08.714 ForwardX11 yes
00:01:08.714
00:01:08.727 [Pipeline] withEnv
00:01:08.730 [Pipeline] {
00:01:08.743 [Pipeline] sh
00:01:09.020 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:09.021 source /etc/os-release
00:01:09.021 [[ -e /image.version ]] && img=$(< /image.version)
00:01:09.021 # Minimal, systemd-like check.
00:01:09.021 if [[ -e /.dockerenv ]]; then
00:01:09.021 # Clear garbage from the node's name:
00:01:09.021 # agt-er_autotest_547-896 -> autotest_547-896
00:01:09.021 # $HOSTNAME is the actual container id
00:01:09.021 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:09.021 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:09.021 # We can assume this is a mount from a host where container is running,
00:01:09.021 # so fetch its hostname to easily identify the target swarm worker.
00:01:09.021 container="$(< /etc/hostname) ($agent)"
00:01:09.021 else
00:01:09.021 # Fallback
00:01:09.021 container=$agent
00:01:09.021 fi
00:01:09.021 fi
00:01:09.021 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:09.021
00:01:09.291 [Pipeline] }
00:01:09.306 [Pipeline] // withEnv
00:01:09.313 [Pipeline] setCustomBuildProperty
00:01:09.326 [Pipeline] stage
00:01:09.328 [Pipeline] { (Tests)
00:01:09.343 [Pipeline] sh
00:01:09.622 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:09.635 [Pipeline] sh
00:01:09.913 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:10.227 [Pipeline] timeout
00:01:10.227 Timeout set to expire in 1 hr 30 min
00:01:10.229 [Pipeline] {
00:01:10.241 [Pipeline] sh
00:01:10.519 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:11.086 HEAD is now at a9e1e4309 nvmf: discovery log page updation change
00:01:11.099 [Pipeline] sh
00:01:11.380 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:11.650 [Pipeline] sh
00:01:11.928 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:11.942 [Pipeline] sh
00:01:12.220 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:12.479 ++ readlink -f spdk_repo
00:01:12.479 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:12.479 + [[ -n /home/vagrant/spdk_repo ]]
00:01:12.479 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:12.479 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:12.479 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:12.479 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:12.479 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:12.479 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:12.479 + cd /home/vagrant/spdk_repo
00:01:12.479 + source /etc/os-release
00:01:12.479 ++ NAME='Fedora Linux'
00:01:12.479 ++ VERSION='39 (Cloud Edition)'
00:01:12.479 ++ ID=fedora
00:01:12.479 ++ VERSION_ID=39
00:01:12.479 ++ VERSION_CODENAME=
00:01:12.479 ++ PLATFORM_ID=platform:f39
00:01:12.479 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:12.479 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:12.479 ++ LOGO=fedora-logo-icon
00:01:12.479 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:12.479 ++ HOME_URL=https://fedoraproject.org/
00:01:12.479 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:12.479 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:12.479 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:12.479 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:12.479 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:12.479 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:12.479 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:12.479 ++ SUPPORT_END=2024-11-12
00:01:12.479 ++ VARIANT='Cloud Edition'
00:01:12.479 ++ VARIANT_ID=cloud
00:01:12.479 + uname -a
00:01:12.479 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:12.479 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:12.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:12.738 Hugepages
00:01:12.738 node hugesize free / total
00:01:12.738 node0 1048576kB 0 / 0
00:01:12.738 node0 2048kB 0 / 0
00:01:12.738
00:01:12.738 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:12.996 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:12.996 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:12.996 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:12.996 + rm -f /tmp/spdk-ld-path
00:01:12.996 + source autorun-spdk.conf
00:01:12.996 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.996 ++ SPDK_RUN_ASAN=1
00:01:12.996 ++ SPDK_RUN_UBSAN=1
00:01:12.996 ++ SPDK_TEST_RAID=1
00:01:12.996 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.996 ++ RUN_NIGHTLY=0
00:01:12.996 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:12.996 + [[ -n '' ]]
00:01:12.996 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:12.996 + for M in /var/spdk/build-*-manifest.txt
00:01:12.996 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:12.996 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:12.996 + for M in /var/spdk/build-*-manifest.txt
00:01:12.996 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:12.996 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:12.996 + for M in /var/spdk/build-*-manifest.txt
00:01:12.996 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:12.996 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:12.996 ++ uname
00:01:12.996 + [[ Linux == \L\i\n\u\x ]]
00:01:12.996 + sudo dmesg -T
00:01:12.996 + sudo dmesg --clear
00:01:12.996 + dmesg_pid=5208
00:01:12.996 + [[ Fedora Linux == FreeBSD ]]
00:01:12.996 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.996 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.996 + sudo dmesg -Tw
00:01:12.996 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:12.996 + [[ -x /usr/src/fio-static/fio ]]
00:01:12.996 + export FIO_BIN=/usr/src/fio-static/fio
00:01:12.996 + FIO_BIN=/usr/src/fio-static/fio
00:01:12.996 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:12.996 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:12.996 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:12.996 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.996 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.996 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:12.996 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.996 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.996 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:12.996 13:14:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:12.996 13:14:01 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:12.996 13:14:01 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.996 13:14:01 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:12.996 13:14:01 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:12.996 13:14:01 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:12.996 13:14:01 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.996 13:14:01 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:12.996 13:14:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:12.996 13:14:01 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:13.255 13:14:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:13.255 13:14:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:13.255 13:14:01 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:13.255 13:14:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:13.255 13:14:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:13.255 13:14:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:13.255 13:14:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.255 13:14:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.255 13:14:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.255 13:14:01 -- paths/export.sh@5 -- $ export PATH
00:01:13.255 13:14:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.255 13:14:01 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:13.255 13:14:01 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:13.255 13:14:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732626841.XXXXXX
00:01:13.255 13:14:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732626841.u7W5Q8
00:01:13.255 13:14:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:13.255 13:14:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:13.255 13:14:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:13.255 13:14:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:13.255 13:14:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:13.255 13:14:01 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:13.255 13:14:01 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:13.255 13:14:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:13.256 13:14:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
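The paths/export.sh trace above shows the same directories (/opt/go, /opt/golangci, /opt/protoc, and so on) being prepended on every source, so PATH accumulates duplicate entries. A small guard of the following shape avoids that; this is a generic sketch, not part of the SPDK scripts.

```shell
#!/usr/bin/env bash
# Generic sketch (not from the SPDK scripts): drop repeated PATH entries
# while keeping the first occurrence of each directory in order.
dedup_path() {
  local IFS=: out= d
  for d in $1; do
    case ":$out:" in
      *":$d:"*) ;;                  # already present, skip
      *) out=${out:+$out:}$d ;;     # first occurrence, keep
    esac
  done
  printf '%s\n' "$out"
}
dedup_path "/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin:/usr/local/bin"
# -> /opt/go/1.21.1/bin:/usr/local/bin
```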
00:01:13.256 13:14:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:13.256 13:14:01 -- pm/common@17 -- $ local monitor
00:01:13.256 13:14:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.256 13:14:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.256 13:14:01 -- pm/common@25 -- $ sleep 1
00:01:13.256 13:14:01 -- pm/common@21 -- $ date +%s
00:01:13.256 13:14:01 -- pm/common@21 -- $ date +%s
00:01:13.256 13:14:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732626841
00:01:13.256 13:14:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732626841
00:01:13.256 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732626841_collect-cpu-load.pm.log
00:01:13.256 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732626841_collect-vmstat.pm.log
00:01:14.191 13:14:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:14.191 13:14:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:14.191 13:14:02 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:14.191 13:14:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:14.191 13:14:02 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.191 Tue Nov 26 01:14:02 PM UTC 2024
00:01:14.191 13:14:02 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.191 v25.01-pre-241-ga9e1e4309
00:01:14.191 13:14:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:14.191 13:14:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:14.191 13:14:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:14.191 13:14:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:14.191 13:14:02 -- common/autotest_common.sh@10 -- $ set +x
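The SPDK_WORKSPACE value earlier in the trace (/tmp/spdk_1732626841.u7W5Q8) comes from combining `date +%s` with mktemp's template flag. A minimal sketch of that pattern, using the same spdk_ prefix for illustration only:

```shell
#!/usr/bin/env bash
# Minimal sketch of the timestamped-workspace pattern: bake an epoch stamp
# into a mktemp template, yielding names like /tmp/spdk_1732626841.u7W5Q8.
stamp=$(date +%s)
ws=$(mktemp -dt "spdk_${stamp}.XXXXXX")
echo "$ws"
```

The XXXXXX suffix is replaced by mktemp with random characters, so concurrent runs started in the same second still get distinct directories.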
00:01:14.191 ************************************
00:01:14.191 START TEST asan
00:01:14.191 ************************************
00:01:14.191 using asan
00:01:14.191 13:14:02 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:14.191
00:01:14.191 real 0m0.000s
00:01:14.191 user 0m0.000s
00:01:14.191 sys 0m0.000s
00:01:14.191 13:14:02 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:14.191 ************************************
00:01:14.191 END TEST asan
00:01:14.191 ************************************
00:01:14.191 13:14:02 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.192 13:14:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:14.192 13:14:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:14.192 13:14:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:14.192 13:14:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:14.192 13:14:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.192 ************************************
00:01:14.192 START TEST ubsan
00:01:14.192 ************************************
00:01:14.192 using ubsan
00:01:14.192 13:14:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:14.192
00:01:14.192 real 0m0.000s
00:01:14.192 user 0m0.000s
00:01:14.192 sys 0m0.000s
00:01:14.192 ************************************
00:01:14.192 END TEST ubsan
00:01:14.192 ************************************
00:01:14.192 13:14:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:14.192 13:14:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.192 13:14:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:14.192 13:14:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:14.192 13:14:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:14.192 13:14:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:14.192 13:14:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:14.192 13:14:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:14.192 13:14:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:14.451 13:14:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:14.451 13:14:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:14.451 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:14.451 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:14.711 Using 'verbs' RDMA provider
00:01:27.848 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:42.736 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:42.736 Creating mk/config.mk...done.
00:01:42.736 Creating mk/cc.flags.mk...done.
00:01:42.736 Type 'make' to build.
00:01:42.736 13:14:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:42.736 13:14:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:42.736 13:14:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:42.736 13:14:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.736 ************************************
00:01:42.736 START TEST make
00:01:42.736 ************************************
00:01:42.736 13:14:29 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:42.736 make[1]: Nothing to be done for 'all'.
00:01:54.939 The Meson build system 00:01:54.939 Version: 1.5.0 00:01:54.939 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:54.939 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:54.939 Build type: native build 00:01:54.939 Program cat found: YES (/usr/bin/cat) 00:01:54.939 Project name: DPDK 00:01:54.939 Project version: 24.03.0 00:01:54.939 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.939 C linker for the host machine: cc ld.bfd 2.40-14 00:01:54.939 Host machine cpu family: x86_64 00:01:54.939 Host machine cpu: x86_64 00:01:54.939 Message: ## Building in Developer Mode ## 00:01:54.939 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.939 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.939 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.939 Program python3 found: YES (/usr/bin/python3) 00:01:54.939 Program cat found: YES (/usr/bin/cat) 00:01:54.939 Compiler for C supports arguments -march=native: YES 00:01:54.939 Checking for size of "void *" : 8 00:01:54.939 Checking for size of "void *" : 8 (cached) 00:01:54.939 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:54.939 Library m found: YES 00:01:54.939 Library numa found: YES 00:01:54.939 Has header "numaif.h" : YES 00:01:54.939 Library fdt found: NO 00:01:54.939 Library execinfo found: NO 00:01:54.939 Has header "execinfo.h" : YES 00:01:54.939 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.939 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.939 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.939 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.939 Run-time dependency openssl found: YES 3.1.1 00:01:54.939 Run-time dependency libpcap found: YES 1.10.4 00:01:54.939 Has header "pcap.h" with dependency 
libpcap: YES 00:01:54.939 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.939 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.939 Compiler for C supports arguments -Wformat: YES 00:01:54.939 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.940 Compiler for C supports arguments -Wformat-security: NO 00:01:54.940 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.940 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.940 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.940 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.940 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.940 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.940 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.940 Compiler for C supports arguments -Wundef: YES 00:01:54.940 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.940 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.940 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.940 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.940 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.940 Program objdump found: YES (/usr/bin/objdump) 00:01:54.940 Compiler for C supports arguments -mavx512f: YES 00:01:54.940 Checking if "AVX512 checking" compiles: YES 00:01:54.940 Fetching value of define "__SSE4_2__" : 1 00:01:54.940 Fetching value of define "__AES__" : 1 00:01:54.940 Fetching value of define "__AVX__" : 1 00:01:54.940 Fetching value of define "__AVX2__" : 1 00:01:54.940 Fetching value of define "__AVX512BW__" : (undefined) 00:01:54.940 Fetching value of define "__AVX512CD__" : (undefined) 00:01:54.940 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:54.940 Fetching value of define "__AVX512F__" : (undefined) 00:01:54.940 Fetching value of define "__AVX512VL__" : 
(undefined) 00:01:54.940 Fetching value of define "__PCLMUL__" : 1 00:01:54.940 Fetching value of define "__RDRND__" : 1 00:01:54.940 Fetching value of define "__RDSEED__" : 1 00:01:54.940 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.940 Fetching value of define "__znver1__" : (undefined) 00:01:54.940 Fetching value of define "__znver2__" : (undefined) 00:01:54.940 Fetching value of define "__znver3__" : (undefined) 00:01:54.940 Fetching value of define "__znver4__" : (undefined) 00:01:54.940 Library asan found: YES 00:01:54.940 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.940 Message: lib/log: Defining dependency "log" 00:01:54.940 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.940 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.940 Library rt found: YES 00:01:54.940 Checking for function "getentropy" : NO 00:01:54.940 Message: lib/eal: Defining dependency "eal" 00:01:54.940 Message: lib/ring: Defining dependency "ring" 00:01:54.940 Message: lib/rcu: Defining dependency "rcu" 00:01:54.940 Message: lib/mempool: Defining dependency "mempool" 00:01:54.940 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.940 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.940 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.940 Compiler for C supports arguments -mpclmul: YES 00:01:54.940 Compiler for C supports arguments -maes: YES 00:01:54.940 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.940 Compiler for C supports arguments -mavx512bw: YES 00:01:54.940 Compiler for C supports arguments -mavx512dq: YES 00:01:54.940 Compiler for C supports arguments -mavx512vl: YES 00:01:54.940 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.940 Compiler for C supports arguments -mavx2: YES 00:01:54.940 Compiler for C supports arguments -mavx: YES 00:01:54.940 Message: lib/net: Defining dependency "net" 00:01:54.940 Message: lib/meter: Defining 
dependency "meter" 00:01:54.940 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.940 Message: lib/pci: Defining dependency "pci" 00:01:54.940 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.940 Message: lib/hash: Defining dependency "hash" 00:01:54.940 Message: lib/timer: Defining dependency "timer" 00:01:54.940 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.940 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.940 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.940 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.940 Message: lib/power: Defining dependency "power" 00:01:54.940 Message: lib/reorder: Defining dependency "reorder" 00:01:54.940 Message: lib/security: Defining dependency "security" 00:01:54.940 Has header "linux/userfaultfd.h" : YES 00:01:54.940 Has header "linux/vduse.h" : YES 00:01:54.940 Message: lib/vhost: Defining dependency "vhost" 00:01:54.940 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.940 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.940 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.940 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.940 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.940 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.940 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.940 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.940 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.940 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.940 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.940 Configuring doxy-api-html.conf using configuration 00:01:54.940 Configuring doxy-api-man.conf using configuration 00:01:54.940 Program mandb found: YES 
(/usr/bin/mandb) 00:01:54.940 Program sphinx-build found: NO 00:01:54.940 Configuring rte_build_config.h using configuration 00:01:54.940 Message: 00:01:54.940 ================= 00:01:54.940 Applications Enabled 00:01:54.940 ================= 00:01:54.940 00:01:54.940 apps: 00:01:54.940 00:01:54.940 00:01:54.940 Message: 00:01:54.940 ================= 00:01:54.940 Libraries Enabled 00:01:54.940 ================= 00:01:54.940 00:01:54.940 libs: 00:01:54.940 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.940 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.940 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.940 00:01:54.940 Message: 00:01:54.940 =============== 00:01:54.940 Drivers Enabled 00:01:54.940 =============== 00:01:54.940 00:01:54.940 common: 00:01:54.940 00:01:54.940 bus: 00:01:54.940 pci, vdev, 00:01:54.940 mempool: 00:01:54.940 ring, 00:01:54.940 dma: 00:01:54.940 00:01:54.940 net: 00:01:54.940 00:01:54.940 crypto: 00:01:54.940 00:01:54.940 compress: 00:01:54.940 00:01:54.940 vdpa: 00:01:54.940 00:01:54.940 00:01:54.940 Message: 00:01:54.940 ================= 00:01:54.940 Content Skipped 00:01:54.940 ================= 00:01:54.940 00:01:54.940 apps: 00:01:54.940 dumpcap: explicitly disabled via build config 00:01:54.940 graph: explicitly disabled via build config 00:01:54.940 pdump: explicitly disabled via build config 00:01:54.940 proc-info: explicitly disabled via build config 00:01:54.940 test-acl: explicitly disabled via build config 00:01:54.940 test-bbdev: explicitly disabled via build config 00:01:54.940 test-cmdline: explicitly disabled via build config 00:01:54.940 test-compress-perf: explicitly disabled via build config 00:01:54.940 test-crypto-perf: explicitly disabled via build config 00:01:54.940 test-dma-perf: explicitly disabled via build config 00:01:54.940 test-eventdev: explicitly disabled via build config 00:01:54.940 test-fib: explicitly disabled via build config 00:01:54.940 
test-flow-perf: explicitly disabled via build config 00:01:54.940 test-gpudev: explicitly disabled via build config 00:01:54.940 test-mldev: explicitly disabled via build config 00:01:54.940 test-pipeline: explicitly disabled via build config 00:01:54.940 test-pmd: explicitly disabled via build config 00:01:54.940 test-regex: explicitly disabled via build config 00:01:54.940 test-sad: explicitly disabled via build config 00:01:54.940 test-security-perf: explicitly disabled via build config 00:01:54.940 00:01:54.940 libs: 00:01:54.940 argparse: explicitly disabled via build config 00:01:54.940 metrics: explicitly disabled via build config 00:01:54.940 acl: explicitly disabled via build config 00:01:54.940 bbdev: explicitly disabled via build config 00:01:54.940 bitratestats: explicitly disabled via build config 00:01:54.940 bpf: explicitly disabled via build config 00:01:54.940 cfgfile: explicitly disabled via build config 00:01:54.940 distributor: explicitly disabled via build config 00:01:54.940 efd: explicitly disabled via build config 00:01:54.940 eventdev: explicitly disabled via build config 00:01:54.940 dispatcher: explicitly disabled via build config 00:01:54.940 gpudev: explicitly disabled via build config 00:01:54.940 gro: explicitly disabled via build config 00:01:54.940 gso: explicitly disabled via build config 00:01:54.940 ip_frag: explicitly disabled via build config 00:01:54.940 jobstats: explicitly disabled via build config 00:01:54.940 latencystats: explicitly disabled via build config 00:01:54.940 lpm: explicitly disabled via build config 00:01:54.940 member: explicitly disabled via build config 00:01:54.940 pcapng: explicitly disabled via build config 00:01:54.940 rawdev: explicitly disabled via build config 00:01:54.940 regexdev: explicitly disabled via build config 00:01:54.940 mldev: explicitly disabled via build config 00:01:54.940 rib: explicitly disabled via build config 00:01:54.940 sched: explicitly disabled via build config 00:01:54.940 
stack: explicitly disabled via build config 00:01:54.940 ipsec: explicitly disabled via build config 00:01:54.940 pdcp: explicitly disabled via build config 00:01:54.940 fib: explicitly disabled via build config 00:01:54.940 port: explicitly disabled via build config 00:01:54.940 pdump: explicitly disabled via build config 00:01:54.940 table: explicitly disabled via build config 00:01:54.940 pipeline: explicitly disabled via build config 00:01:54.940 graph: explicitly disabled via build config 00:01:54.940 node: explicitly disabled via build config 00:01:54.940 00:01:54.940 drivers: 00:01:54.940 common/cpt: not in enabled drivers build config 00:01:54.940 common/dpaax: not in enabled drivers build config 00:01:54.940 common/iavf: not in enabled drivers build config 00:01:54.940 common/idpf: not in enabled drivers build config 00:01:54.940 common/ionic: not in enabled drivers build config 00:01:54.940 common/mvep: not in enabled drivers build config 00:01:54.940 common/octeontx: not in enabled drivers build config 00:01:54.940 bus/auxiliary: not in enabled drivers build config 00:01:54.941 bus/cdx: not in enabled drivers build config 00:01:54.941 bus/dpaa: not in enabled drivers build config 00:01:54.941 bus/fslmc: not in enabled drivers build config 00:01:54.941 bus/ifpga: not in enabled drivers build config 00:01:54.941 bus/platform: not in enabled drivers build config 00:01:54.941 bus/uacce: not in enabled drivers build config 00:01:54.941 bus/vmbus: not in enabled drivers build config 00:01:54.941 common/cnxk: not in enabled drivers build config 00:01:54.941 common/mlx5: not in enabled drivers build config 00:01:54.941 common/nfp: not in enabled drivers build config 00:01:54.941 common/nitrox: not in enabled drivers build config 00:01:54.941 common/qat: not in enabled drivers build config 00:01:54.941 common/sfc_efx: not in enabled drivers build config 00:01:54.941 mempool/bucket: not in enabled drivers build config 00:01:54.941 mempool/cnxk: not in enabled 
drivers build config 00:01:54.941 mempool/dpaa: not in enabled drivers build config 00:01:54.941 mempool/dpaa2: not in enabled drivers build config 00:01:54.941 mempool/octeontx: not in enabled drivers build config 00:01:54.941 mempool/stack: not in enabled drivers build config 00:01:54.941 dma/cnxk: not in enabled drivers build config 00:01:54.941 dma/dpaa: not in enabled drivers build config 00:01:54.941 dma/dpaa2: not in enabled drivers build config 00:01:54.941 dma/hisilicon: not in enabled drivers build config 00:01:54.941 dma/idxd: not in enabled drivers build config 00:01:54.941 dma/ioat: not in enabled drivers build config 00:01:54.941 dma/skeleton: not in enabled drivers build config 00:01:54.941 net/af_packet: not in enabled drivers build config 00:01:54.941 net/af_xdp: not in enabled drivers build config 00:01:54.941 net/ark: not in enabled drivers build config 00:01:54.941 net/atlantic: not in enabled drivers build config 00:01:54.941 net/avp: not in enabled drivers build config 00:01:54.941 net/axgbe: not in enabled drivers build config 00:01:54.941 net/bnx2x: not in enabled drivers build config 00:01:54.941 net/bnxt: not in enabled drivers build config 00:01:54.941 net/bonding: not in enabled drivers build config 00:01:54.941 net/cnxk: not in enabled drivers build config 00:01:54.941 net/cpfl: not in enabled drivers build config 00:01:54.941 net/cxgbe: not in enabled drivers build config 00:01:54.941 net/dpaa: not in enabled drivers build config 00:01:54.941 net/dpaa2: not in enabled drivers build config 00:01:54.941 net/e1000: not in enabled drivers build config 00:01:54.941 net/ena: not in enabled drivers build config 00:01:54.941 net/enetc: not in enabled drivers build config 00:01:54.941 net/enetfec: not in enabled drivers build config 00:01:54.941 net/enic: not in enabled drivers build config 00:01:54.941 net/failsafe: not in enabled drivers build config 00:01:54.941 net/fm10k: not in enabled drivers build config 00:01:54.941 net/gve: not in 
enabled drivers build config 00:01:54.941 net/hinic: not in enabled drivers build config 00:01:54.941 net/hns3: not in enabled drivers build config 00:01:54.941 net/i40e: not in enabled drivers build config 00:01:54.941 net/iavf: not in enabled drivers build config 00:01:54.941 net/ice: not in enabled drivers build config 00:01:54.941 net/idpf: not in enabled drivers build config 00:01:54.941 net/igc: not in enabled drivers build config 00:01:54.941 net/ionic: not in enabled drivers build config 00:01:54.941 net/ipn3ke: not in enabled drivers build config 00:01:54.941 net/ixgbe: not in enabled drivers build config 00:01:54.941 net/mana: not in enabled drivers build config 00:01:54.941 net/memif: not in enabled drivers build config 00:01:54.941 net/mlx4: not in enabled drivers build config 00:01:54.941 net/mlx5: not in enabled drivers build config 00:01:54.941 net/mvneta: not in enabled drivers build config 00:01:54.941 net/mvpp2: not in enabled drivers build config 00:01:54.941 net/netvsc: not in enabled drivers build config 00:01:54.941 net/nfb: not in enabled drivers build config 00:01:54.941 net/nfp: not in enabled drivers build config 00:01:54.941 net/ngbe: not in enabled drivers build config 00:01:54.941 net/null: not in enabled drivers build config 00:01:54.941 net/octeontx: not in enabled drivers build config 00:01:54.941 net/octeon_ep: not in enabled drivers build config 00:01:54.941 net/pcap: not in enabled drivers build config 00:01:54.941 net/pfe: not in enabled drivers build config 00:01:54.941 net/qede: not in enabled drivers build config 00:01:54.941 net/ring: not in enabled drivers build config 00:01:54.941 net/sfc: not in enabled drivers build config 00:01:54.941 net/softnic: not in enabled drivers build config 00:01:54.941 net/tap: not in enabled drivers build config 00:01:54.941 net/thunderx: not in enabled drivers build config 00:01:54.941 net/txgbe: not in enabled drivers build config 00:01:54.941 net/vdev_netvsc: not in enabled drivers build 
config 00:01:54.941 net/vhost: not in enabled drivers build config 00:01:54.941 net/virtio: not in enabled drivers build config 00:01:54.941 net/vmxnet3: not in enabled drivers build config 00:01:54.941 raw/*: missing internal dependency, "rawdev" 00:01:54.941 crypto/armv8: not in enabled drivers build config 00:01:54.941 crypto/bcmfs: not in enabled drivers build config 00:01:54.941 crypto/caam_jr: not in enabled drivers build config 00:01:54.941 crypto/ccp: not in enabled drivers build config 00:01:54.941 crypto/cnxk: not in enabled drivers build config 00:01:54.941 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.941 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.941 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.941 crypto/mlx5: not in enabled drivers build config 00:01:54.941 crypto/mvsam: not in enabled drivers build config 00:01:54.941 crypto/nitrox: not in enabled drivers build config 00:01:54.941 crypto/null: not in enabled drivers build config 00:01:54.941 crypto/octeontx: not in enabled drivers build config 00:01:54.941 crypto/openssl: not in enabled drivers build config 00:01:54.941 crypto/scheduler: not in enabled drivers build config 00:01:54.941 crypto/uadk: not in enabled drivers build config 00:01:54.941 crypto/virtio: not in enabled drivers build config 00:01:54.941 compress/isal: not in enabled drivers build config 00:01:54.941 compress/mlx5: not in enabled drivers build config 00:01:54.941 compress/nitrox: not in enabled drivers build config 00:01:54.941 compress/octeontx: not in enabled drivers build config 00:01:54.941 compress/zlib: not in enabled drivers build config 00:01:54.941 regex/*: missing internal dependency, "regexdev" 00:01:54.941 ml/*: missing internal dependency, "mldev" 00:01:54.941 vdpa/ifc: not in enabled drivers build config 00:01:54.941 vdpa/mlx5: not in enabled drivers build config 00:01:54.941 vdpa/nfp: not in enabled drivers build config 00:01:54.941 vdpa/sfc: not in enabled 
drivers build config 00:01:54.941 event/*: missing internal dependency, "eventdev" 00:01:54.941 baseband/*: missing internal dependency, "bbdev" 00:01:54.941 gpu/*: missing internal dependency, "gpudev" 00:01:54.941 00:01:54.941 00:01:54.941 Build targets in project: 85 00:01:54.941 00:01:54.941 DPDK 24.03.0 00:01:54.941 00:01:54.941 User defined options 00:01:54.941 buildtype : debug 00:01:54.941 default_library : shared 00:01:54.941 libdir : lib 00:01:54.941 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:54.941 b_sanitize : address 00:01:54.941 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.941 c_link_args : 00:01:54.941 cpu_instruction_set: native 00:01:54.941 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:54.941 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:54.941 enable_docs : false 00:01:54.941 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:54.941 enable_kmods : false 00:01:54.941 max_lcores : 128 00:01:54.941 tests : false 00:01:54.941 00:01:54.941 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.941 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:54.941 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.941 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.941 [3/268] Linking static target lib/librte_kvargs.a 00:01:54.941 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:01:54.941 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.941 [6/268] Linking static target lib/librte_log.a 00:01:54.941 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:54.941 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.941 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.941 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.941 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.941 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:54.941 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.941 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:54.941 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:54.942 [16/268] Linking static target lib/librte_telemetry.a 00:01:54.942 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.942 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:54.942 [19/268] Linking target lib/librte_log.so.24.1 00:01:54.942 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.200 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.459 [22/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.459 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.459 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.459 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.459 [26/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.717 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.717 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.717 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.717 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.717 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.976 [32/268] Linking target lib/librte_telemetry.so.24.1 00:01:55.976 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.976 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.976 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.234 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:56.234 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.493 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:56.493 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:56.493 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:56.493 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.493 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:56.493 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:56.493 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.768 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.042 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.042 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.042 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.301 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.301 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.301 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.559 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.559 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.819 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.819 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.819 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.078 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.078 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.078 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.337 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.337 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.337 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.337 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.596 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.596 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.596 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.855 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.855 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.855 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 
00:01:58.855 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.855 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.114 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.114 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.114 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:59.114 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.114 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:59.114 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:59.372 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:59.372 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:59.631 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:59.631 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:59.631 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:59.631 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:59.631 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:59.631 [85/268] Linking static target lib/librte_ring.a 00:01:59.890 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:59.890 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:59.890 [88/268] Linking static target lib/librte_eal.a 00:02:00.148 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.148 [90/268] Linking static target lib/librte_rcu.a 00:02:00.148 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.148 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.407 [93/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.407 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.666 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.666 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.666 [97/268] Linking static target lib/librte_mempool.a 00:02:00.666 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.666 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.924 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.924 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:00.924 [102/268] Linking static target lib/librte_mbuf.a 00:02:01.183 [103/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.183 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.183 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.183 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.441 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.441 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.441 [109/268] Linking static target lib/librte_net.a 00:02:01.441 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.700 [111/268] Linking static target lib/librte_meter.a 00:02:01.700 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.700 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.958 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.958 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.958 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 
00:02:01.958 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.217 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.217 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:02.476 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.476 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.735 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:02.995 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.995 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.253 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.253 [126/268] Linking static target lib/librte_pci.a 00:02:03.253 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.254 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.254 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.512 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.512 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.512 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:03.512 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.512 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.770 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.770 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.770 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.770 [138/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.770 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.770 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.770 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.029 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:04.029 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:04.029 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:04.029 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:04.288 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:04.288 [147/268] Linking static target lib/librte_cmdline.a 00:02:04.547 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:04.805 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:04.805 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.805 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.064 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.064 [153/268] Linking static target lib/librte_timer.a 00:02:05.064 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.064 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.324 [156/268] Linking static target lib/librte_ethdev.a 00:02:05.324 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.324 [158/268] Linking static target lib/librte_hash.a 00:02:05.583 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.583 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.583 [161/268] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:05.583 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.842 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.842 [164/268] Linking static target lib/librte_compressdev.a 00:02:05.842 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:05.842 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.100 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.100 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:06.359 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.359 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:06.359 [171/268] Linking static target lib/librte_dmadev.a 00:02:06.618 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:06.618 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:06.618 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.618 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:06.877 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.877 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:07.136 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.137 [179/268] Linking static target lib/librte_cryptodev.a 00:02:07.137 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.137 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.396 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:07.396 [183/268] Generating 
lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.396 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.396 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.396 [186/268] Linking static target lib/librte_power.a 00:02:07.655 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:07.655 [188/268] Linking static target lib/librte_reorder.a 00:02:07.914 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:07.914 [190/268] Linking static target lib/librte_security.a 00:02:07.914 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:08.173 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:08.173 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.433 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.691 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:08.691 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.691 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.952 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:09.211 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:09.469 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:09.469 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:09.469 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:09.729 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.729 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:09.729 [205/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:09.987 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:10.246 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:10.246 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:10.246 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:10.505 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:10.505 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:10.505 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:10.505 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:10.766 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:10.766 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:10.766 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.766 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.766 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.766 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.766 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:10.766 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:10.766 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:10.766 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.766 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.766 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:11.027 [226/268] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.286 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.855 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.114 [229/268] Linking target lib/librte_eal.so.24.1 00:02:12.114 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:12.114 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:12.114 [232/268] Linking target lib/librte_meter.so.24.1 00:02:12.114 [233/268] Linking target lib/librte_pci.so.24.1 00:02:12.114 [234/268] Linking target lib/librte_timer.so.24.1 00:02:12.114 [235/268] Linking target lib/librte_ring.so.24.1 00:02:12.114 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:12.374 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:12.374 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:12.374 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:12.374 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:12.374 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:12.374 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:12.374 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:12.374 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:12.374 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:12.634 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:12.634 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:12.634 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:12.634 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:12.634 
[250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:12.893 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:12.893 [252/268] Linking target lib/librte_net.so.24.1 00:02:12.893 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:12.893 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:12.893 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:12.893 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:12.893 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:12.893 [258/268] Linking target lib/librte_hash.so.24.1 00:02:12.893 [259/268] Linking target lib/librte_security.so.24.1 00:02:13.152 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:13.412 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.412 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:13.671 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:13.671 [264/268] Linking target lib/librte_power.so.24.1 00:02:15.576 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.835 [266/268] Linking static target lib/librte_vhost.a 00:02:17.269 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.552 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:17.553 INFO: autodetecting backend as ninja 00:02:17.553 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:35.647 CC lib/ut/ut.o 00:02:35.647 CC lib/ut_mock/mock.o 00:02:35.647 CC lib/log/log.o 00:02:35.647 CC lib/log/log_deprecated.o 00:02:35.647 CC lib/log/log_flags.o 00:02:35.647 LIB libspdk_ut_mock.a 00:02:35.647 LIB libspdk_ut.a 00:02:35.647 LIB libspdk_log.a 00:02:35.647 SO libspdk_ut_mock.so.6.0 
00:02:35.647 SO libspdk_ut.so.2.0 00:02:35.647 SO libspdk_log.so.7.1 00:02:35.647 SYMLINK libspdk_ut_mock.so 00:02:35.647 SYMLINK libspdk_ut.so 00:02:35.647 SYMLINK libspdk_log.so 00:02:35.647 CC lib/util/base64.o 00:02:35.647 CC lib/ioat/ioat.o 00:02:35.647 CC lib/dma/dma.o 00:02:35.647 CC lib/util/bit_array.o 00:02:35.647 CC lib/util/cpuset.o 00:02:35.647 CC lib/util/crc32.o 00:02:35.647 CC lib/util/crc16.o 00:02:35.647 CC lib/util/crc32c.o 00:02:35.647 CXX lib/trace_parser/trace.o 00:02:35.647 CC lib/vfio_user/host/vfio_user_pci.o 00:02:35.647 CC lib/util/crc32_ieee.o 00:02:35.647 CC lib/vfio_user/host/vfio_user.o 00:02:35.647 CC lib/util/crc64.o 00:02:35.647 CC lib/util/dif.o 00:02:35.647 LIB libspdk_dma.a 00:02:35.647 CC lib/util/fd.o 00:02:35.647 SO libspdk_dma.so.5.0 00:02:35.647 CC lib/util/fd_group.o 00:02:35.647 CC lib/util/file.o 00:02:35.647 CC lib/util/hexlify.o 00:02:35.647 SYMLINK libspdk_dma.so 00:02:35.647 CC lib/util/iov.o 00:02:35.647 LIB libspdk_ioat.a 00:02:35.647 CC lib/util/math.o 00:02:35.647 SO libspdk_ioat.so.7.0 00:02:35.647 LIB libspdk_vfio_user.a 00:02:35.647 CC lib/util/net.o 00:02:35.647 CC lib/util/pipe.o 00:02:35.647 SO libspdk_vfio_user.so.5.0 00:02:35.647 SYMLINK libspdk_ioat.so 00:02:35.647 CC lib/util/strerror_tls.o 00:02:35.647 CC lib/util/string.o 00:02:35.647 SYMLINK libspdk_vfio_user.so 00:02:35.647 CC lib/util/uuid.o 00:02:35.647 CC lib/util/xor.o 00:02:35.647 CC lib/util/zipf.o 00:02:35.647 CC lib/util/md5.o 00:02:35.906 LIB libspdk_util.a 00:02:35.906 SO libspdk_util.so.10.1 00:02:35.906 LIB libspdk_trace_parser.a 00:02:35.906 SO libspdk_trace_parser.so.6.0 00:02:36.165 SYMLINK libspdk_util.so 00:02:36.165 SYMLINK libspdk_trace_parser.so 00:02:36.165 CC lib/idxd/idxd.o 00:02:36.165 CC lib/rdma_utils/rdma_utils.o 00:02:36.165 CC lib/idxd/idxd_user.o 00:02:36.165 CC lib/json/json_util.o 00:02:36.165 CC lib/json/json_write.o 00:02:36.165 CC lib/idxd/idxd_kernel.o 00:02:36.165 CC lib/json/json_parse.o 00:02:36.165 CC 
lib/env_dpdk/env.o 00:02:36.165 CC lib/vmd/vmd.o 00:02:36.165 CC lib/conf/conf.o 00:02:36.424 CC lib/env_dpdk/memory.o 00:02:36.424 LIB libspdk_conf.a 00:02:36.424 CC lib/env_dpdk/pci.o 00:02:36.424 SO libspdk_conf.so.6.0 00:02:36.424 LIB libspdk_rdma_utils.a 00:02:36.424 CC lib/vmd/led.o 00:02:36.683 CC lib/env_dpdk/init.o 00:02:36.683 SO libspdk_rdma_utils.so.1.0 00:02:36.683 SYMLINK libspdk_conf.so 00:02:36.683 LIB libspdk_json.a 00:02:36.683 CC lib/env_dpdk/threads.o 00:02:36.683 SO libspdk_json.so.6.0 00:02:36.683 SYMLINK libspdk_rdma_utils.so 00:02:36.683 CC lib/env_dpdk/pci_ioat.o 00:02:36.683 SYMLINK libspdk_json.so 00:02:36.683 CC lib/env_dpdk/pci_virtio.o 00:02:36.683 CC lib/env_dpdk/pci_vmd.o 00:02:36.942 CC lib/rdma_provider/common.o 00:02:36.942 CC lib/env_dpdk/pci_idxd.o 00:02:36.942 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.942 CC lib/env_dpdk/pci_event.o 00:02:36.942 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:36.942 CC lib/env_dpdk/sigbus_handler.o 00:02:36.942 LIB libspdk_idxd.a 00:02:36.942 CC lib/env_dpdk/pci_dpdk.o 00:02:36.942 LIB libspdk_vmd.a 00:02:36.942 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.942 SO libspdk_idxd.so.12.1 00:02:36.942 SO libspdk_vmd.so.6.0 00:02:37.202 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:37.202 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.202 SYMLINK libspdk_vmd.so 00:02:37.202 SYMLINK libspdk_idxd.so 00:02:37.202 CC lib/jsonrpc/jsonrpc_client.o 00:02:37.202 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:37.202 LIB libspdk_rdma_provider.a 00:02:37.202 SO libspdk_rdma_provider.so.7.0 00:02:37.202 SYMLINK libspdk_rdma_provider.so 00:02:37.461 LIB libspdk_jsonrpc.a 00:02:37.461 SO libspdk_jsonrpc.so.6.0 00:02:37.461 SYMLINK libspdk_jsonrpc.so 00:02:37.720 CC lib/rpc/rpc.o 00:02:37.978 LIB libspdk_rpc.a 00:02:37.978 LIB libspdk_env_dpdk.a 00:02:37.978 SO libspdk_rpc.so.6.0 00:02:37.978 SYMLINK libspdk_rpc.so 00:02:37.978 SO libspdk_env_dpdk.so.15.1 00:02:38.238 SYMLINK libspdk_env_dpdk.so 00:02:38.238 CC 
lib/keyring/keyring_rpc.o 00:02:38.238 CC lib/keyring/keyring.o 00:02:38.238 CC lib/trace/trace.o 00:02:38.238 CC lib/trace/trace_flags.o 00:02:38.238 CC lib/trace/trace_rpc.o 00:02:38.238 CC lib/notify/notify.o 00:02:38.238 CC lib/notify/notify_rpc.o 00:02:38.497 LIB libspdk_notify.a 00:02:38.497 SO libspdk_notify.so.6.0 00:02:38.497 LIB libspdk_keyring.a 00:02:38.497 SYMLINK libspdk_notify.so 00:02:38.497 SO libspdk_keyring.so.2.0 00:02:38.497 LIB libspdk_trace.a 00:02:38.497 SO libspdk_trace.so.11.0 00:02:38.497 SYMLINK libspdk_keyring.so 00:02:38.497 SYMLINK libspdk_trace.so 00:02:38.756 CC lib/thread/thread.o 00:02:38.756 CC lib/thread/iobuf.o 00:02:38.756 CC lib/sock/sock.o 00:02:38.756 CC lib/sock/sock_rpc.o 00:02:39.324 LIB libspdk_sock.a 00:02:39.585 SO libspdk_sock.so.10.0 00:02:39.585 SYMLINK libspdk_sock.so 00:02:39.882 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.882 CC lib/nvme/nvme_ctrlr.o 00:02:39.882 CC lib/nvme/nvme_fabric.o 00:02:39.882 CC lib/nvme/nvme_ns.o 00:02:39.882 CC lib/nvme/nvme_ns_cmd.o 00:02:39.882 CC lib/nvme/nvme_pcie.o 00:02:39.882 CC lib/nvme/nvme_qpair.o 00:02:39.882 CC lib/nvme/nvme_pcie_common.o 00:02:39.882 CC lib/nvme/nvme.o 00:02:40.821 CC lib/nvme/nvme_quirks.o 00:02:40.821 CC lib/nvme/nvme_transport.o 00:02:40.821 CC lib/nvme/nvme_discovery.o 00:02:40.821 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.821 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.821 CC lib/nvme/nvme_tcp.o 00:02:40.821 LIB libspdk_thread.a 00:02:40.821 SO libspdk_thread.so.11.0 00:02:41.080 CC lib/nvme/nvme_opal.o 00:02:41.080 SYMLINK libspdk_thread.so 00:02:41.081 CC lib/nvme/nvme_io_msg.o 00:02:41.081 CC lib/nvme/nvme_poll_group.o 00:02:41.339 CC lib/nvme/nvme_zns.o 00:02:41.339 CC lib/nvme/nvme_stubs.o 00:02:41.339 CC lib/nvme/nvme_auth.o 00:02:41.339 CC lib/nvme/nvme_cuse.o 00:02:41.598 CC lib/nvme/nvme_rdma.o 00:02:41.856 CC lib/accel/accel.o 00:02:41.857 CC lib/blob/blobstore.o 00:02:41.857 CC lib/blob/request.o 00:02:42.115 CC lib/init/json_config.o 
00:02:42.115 CC lib/virtio/virtio.o 00:02:42.374 CC lib/init/subsystem.o 00:02:42.374 CC lib/accel/accel_rpc.o 00:02:42.374 CC lib/virtio/virtio_vhost_user.o 00:02:42.374 CC lib/blob/zeroes.o 00:02:42.374 CC lib/accel/accel_sw.o 00:02:42.374 CC lib/init/subsystem_rpc.o 00:02:42.633 CC lib/blob/blob_bs_dev.o 00:02:42.633 CC lib/virtio/virtio_vfio_user.o 00:02:42.633 CC lib/init/rpc.o 00:02:42.633 CC lib/virtio/virtio_pci.o 00:02:42.633 CC lib/fsdev/fsdev.o 00:02:42.892 LIB libspdk_init.a 00:02:42.892 CC lib/fsdev/fsdev_io.o 00:02:42.892 CC lib/fsdev/fsdev_rpc.o 00:02:42.892 SO libspdk_init.so.6.0 00:02:42.892 SYMLINK libspdk_init.so 00:02:43.151 LIB libspdk_virtio.a 00:02:43.151 SO libspdk_virtio.so.7.0 00:02:43.151 CC lib/event/app.o 00:02:43.151 CC lib/event/app_rpc.o 00:02:43.151 CC lib/event/reactor.o 00:02:43.151 CC lib/event/log_rpc.o 00:02:43.151 LIB libspdk_accel.a 00:02:43.151 SYMLINK libspdk_virtio.so 00:02:43.151 CC lib/event/scheduler_static.o 00:02:43.151 SO libspdk_accel.so.16.0 00:02:43.151 LIB libspdk_nvme.a 00:02:43.411 SYMLINK libspdk_accel.so 00:02:43.411 SO libspdk_nvme.so.15.0 00:02:43.411 LIB libspdk_fsdev.a 00:02:43.411 SO libspdk_fsdev.so.2.0 00:02:43.411 CC lib/bdev/bdev.o 00:02:43.411 CC lib/bdev/bdev_rpc.o 00:02:43.411 CC lib/bdev/bdev_zone.o 00:02:43.411 CC lib/bdev/part.o 00:02:43.411 CC lib/bdev/scsi_nvme.o 00:02:43.669 SYMLINK libspdk_fsdev.so 00:02:43.669 SYMLINK libspdk_nvme.so 00:02:43.669 LIB libspdk_event.a 00:02:43.928 SO libspdk_event.so.14.0 00:02:43.928 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:43.928 SYMLINK libspdk_event.so 00:02:44.496 LIB libspdk_fuse_dispatcher.a 00:02:44.496 SO libspdk_fuse_dispatcher.so.1.0 00:02:44.755 SYMLINK libspdk_fuse_dispatcher.so 00:02:46.133 LIB libspdk_blob.a 00:02:46.133 SO libspdk_blob.so.12.0 00:02:46.133 SYMLINK libspdk_blob.so 00:02:46.392 CC lib/lvol/lvol.o 00:02:46.392 CC lib/blobfs/blobfs.o 00:02:46.392 CC lib/blobfs/tree.o 00:02:46.651 LIB libspdk_bdev.a 00:02:46.651 SO 
libspdk_bdev.so.17.0 00:02:46.651 SYMLINK libspdk_bdev.so 00:02:46.910 CC lib/nvmf/ctrlr.o 00:02:46.910 CC lib/nvmf/ctrlr_discovery.o 00:02:46.910 CC lib/nvmf/ctrlr_bdev.o 00:02:46.910 CC lib/ublk/ublk.o 00:02:46.910 CC lib/nvmf/subsystem.o 00:02:46.910 CC lib/nbd/nbd.o 00:02:46.910 CC lib/ftl/ftl_core.o 00:02:46.910 CC lib/scsi/dev.o 00:02:47.169 CC lib/scsi/lun.o 00:02:47.429 CC lib/ftl/ftl_init.o 00:02:47.429 LIB libspdk_blobfs.a 00:02:47.429 SO libspdk_blobfs.so.11.0 00:02:47.429 CC lib/nbd/nbd_rpc.o 00:02:47.688 SYMLINK libspdk_blobfs.so 00:02:47.688 CC lib/ftl/ftl_layout.o 00:02:47.688 LIB libspdk_lvol.a 00:02:47.688 CC lib/ftl/ftl_debug.o 00:02:47.688 SO libspdk_lvol.so.11.0 00:02:47.688 CC lib/scsi/port.o 00:02:47.688 CC lib/scsi/scsi.o 00:02:47.688 LIB libspdk_nbd.a 00:02:47.688 SYMLINK libspdk_lvol.so 00:02:47.688 CC lib/scsi/scsi_bdev.o 00:02:47.688 SO libspdk_nbd.so.7.0 00:02:47.688 SYMLINK libspdk_nbd.so 00:02:47.688 CC lib/ublk/ublk_rpc.o 00:02:47.688 CC lib/nvmf/nvmf.o 00:02:47.688 CC lib/ftl/ftl_io.o 00:02:47.688 CC lib/scsi/scsi_pr.o 00:02:47.948 CC lib/ftl/ftl_sb.o 00:02:47.948 CC lib/ftl/ftl_l2p.o 00:02:47.948 CC lib/ftl/ftl_l2p_flat.o 00:02:47.948 LIB libspdk_ublk.a 00:02:47.948 SO libspdk_ublk.so.3.0 00:02:47.948 CC lib/nvmf/nvmf_rpc.o 00:02:48.207 SYMLINK libspdk_ublk.so 00:02:48.207 CC lib/nvmf/transport.o 00:02:48.207 CC lib/nvmf/tcp.o 00:02:48.207 CC lib/nvmf/stubs.o 00:02:48.207 CC lib/ftl/ftl_nv_cache.o 00:02:48.207 CC lib/scsi/scsi_rpc.o 00:02:48.207 CC lib/nvmf/mdns_server.o 00:02:48.467 CC lib/scsi/task.o 00:02:48.467 CC lib/nvmf/rdma.o 00:02:48.726 LIB libspdk_scsi.a 00:02:48.726 CC lib/nvmf/auth.o 00:02:48.726 SO libspdk_scsi.so.9.0 00:02:48.726 SYMLINK libspdk_scsi.so 00:02:48.726 CC lib/ftl/ftl_band.o 00:02:48.985 CC lib/ftl/ftl_band_ops.o 00:02:48.985 CC lib/iscsi/conn.o 00:02:48.985 CC lib/vhost/vhost.o 00:02:49.244 CC lib/vhost/vhost_rpc.o 00:02:49.244 CC lib/ftl/ftl_writer.o 00:02:49.244 CC lib/ftl/ftl_rq.o 00:02:49.244 CC 
lib/ftl/ftl_reloc.o 00:02:49.503 CC lib/ftl/ftl_l2p_cache.o 00:02:49.503 CC lib/ftl/ftl_p2l.o 00:02:49.503 CC lib/ftl/ftl_p2l_log.o 00:02:49.764 CC lib/ftl/mngt/ftl_mngt.o 00:02:49.764 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:49.764 CC lib/iscsi/init_grp.o 00:02:49.764 CC lib/vhost/vhost_scsi.o 00:02:50.023 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:50.023 CC lib/iscsi/iscsi.o 00:02:50.023 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:50.023 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:50.023 CC lib/vhost/vhost_blk.o 00:02:50.023 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:50.023 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:50.283 CC lib/vhost/rte_vhost_user.o 00:02:50.283 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:50.283 CC lib/iscsi/param.o 00:02:50.283 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:50.283 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:50.542 CC lib/iscsi/portal_grp.o 00:02:50.542 CC lib/iscsi/tgt_node.o 00:02:50.542 CC lib/iscsi/iscsi_subsystem.o 00:02:50.542 CC lib/iscsi/iscsi_rpc.o 00:02:50.811 CC lib/iscsi/task.o 00:02:50.811 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:50.811 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.072 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.072 CC lib/ftl/utils/ftl_conf.o 00:02:51.072 CC lib/ftl/utils/ftl_md.o 00:02:51.072 CC lib/ftl/utils/ftl_mempool.o 00:02:51.331 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.331 CC lib/ftl/utils/ftl_property.o 00:02:51.331 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:51.331 LIB libspdk_nvmf.a 00:02:51.331 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:51.331 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:51.331 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:51.331 LIB libspdk_vhost.a 00:02:51.331 SO libspdk_nvmf.so.20.0 00:02:51.331 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:51.590 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:51.590 SO libspdk_vhost.so.8.0 00:02:51.590 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:51.590 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:51.590 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:51.590 SYMLINK libspdk_vhost.so 00:02:51.590 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:02:51.590 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:51.590 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:51.590 LIB libspdk_iscsi.a 00:02:51.590 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:51.590 SYMLINK libspdk_nvmf.so 00:02:51.590 CC lib/ftl/base/ftl_base_dev.o 00:02:51.590 CC lib/ftl/base/ftl_base_bdev.o 00:02:51.849 SO libspdk_iscsi.so.8.0 00:02:51.849 CC lib/ftl/ftl_trace.o 00:02:51.849 SYMLINK libspdk_iscsi.so 00:02:52.108 LIB libspdk_ftl.a 00:02:52.366 SO libspdk_ftl.so.9.0 00:02:52.625 SYMLINK libspdk_ftl.so 00:02:52.883 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.883 CC module/sock/posix/posix.o 00:02:52.883 CC module/keyring/file/keyring.o 00:02:52.883 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:52.883 CC module/fsdev/aio/fsdev_aio.o 00:02:52.883 CC module/blob/bdev/blob_bdev.o 00:02:52.883 CC module/accel/error/accel_error.o 00:02:52.883 CC module/keyring/linux/keyring.o 00:02:52.883 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:52.883 CC module/accel/ioat/accel_ioat.o 00:02:52.883 LIB libspdk_env_dpdk_rpc.a 00:02:52.883 SO libspdk_env_dpdk_rpc.so.6.0 00:02:53.143 SYMLINK libspdk_env_dpdk_rpc.so 00:02:53.143 CC module/keyring/file/keyring_rpc.o 00:02:53.143 CC module/accel/ioat/accel_ioat_rpc.o 00:02:53.143 CC module/keyring/linux/keyring_rpc.o 00:02:53.143 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.143 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:53.143 LIB libspdk_scheduler_dynamic.a 00:02:53.143 CC module/accel/error/accel_error_rpc.o 00:02:53.143 SO libspdk_scheduler_dynamic.so.4.0 00:02:53.143 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:53.143 LIB libspdk_keyring_linux.a 00:02:53.143 SYMLINK libspdk_scheduler_dynamic.so 00:02:53.143 LIB libspdk_accel_ioat.a 00:02:53.143 LIB libspdk_keyring_file.a 00:02:53.143 LIB libspdk_blob_bdev.a 00:02:53.143 SO libspdk_keyring_file.so.2.0 00:02:53.143 SO libspdk_keyring_linux.so.1.0 00:02:53.143 SO libspdk_accel_ioat.so.6.0 00:02:53.402 SO 
libspdk_blob_bdev.so.12.0 00:02:53.402 LIB libspdk_accel_error.a 00:02:53.402 CC module/scheduler/gscheduler/gscheduler.o 00:02:53.402 SYMLINK libspdk_keyring_linux.so 00:02:53.402 SYMLINK libspdk_accel_ioat.so 00:02:53.402 SO libspdk_accel_error.so.2.0 00:02:53.402 SYMLINK libspdk_keyring_file.so 00:02:53.402 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:53.402 CC module/fsdev/aio/linux_aio_mgr.o 00:02:53.402 SYMLINK libspdk_blob_bdev.so 00:02:53.402 CC module/accel/dsa/accel_dsa.o 00:02:53.402 SYMLINK libspdk_accel_error.so 00:02:53.402 CC module/accel/iaa/accel_iaa.o 00:02:53.402 CC module/accel/dsa/accel_dsa_rpc.o 00:02:53.402 LIB libspdk_scheduler_gscheduler.a 00:02:53.402 SO libspdk_scheduler_gscheduler.so.4.0 00:02:53.661 CC module/accel/iaa/accel_iaa_rpc.o 00:02:53.661 CC module/bdev/delay/vbdev_delay.o 00:02:53.661 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.661 SYMLINK libspdk_scheduler_gscheduler.so 00:02:53.661 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.661 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.661 CC module/bdev/error/vbdev_error.o 00:02:53.661 LIB libspdk_accel_dsa.a 00:02:53.661 CC module/bdev/gpt/gpt.o 00:02:53.661 LIB libspdk_accel_iaa.a 00:02:53.661 SO libspdk_accel_dsa.so.5.0 00:02:53.920 SO libspdk_accel_iaa.so.3.0 00:02:53.920 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.920 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.920 LIB libspdk_sock_posix.a 00:02:53.920 SYMLINK libspdk_accel_dsa.so 00:02:53.920 LIB libspdk_blobfs_bdev.a 00:02:53.920 LIB libspdk_fsdev_aio.a 00:02:53.920 SO libspdk_blobfs_bdev.so.6.0 00:02:53.920 SO libspdk_sock_posix.so.6.0 00:02:53.920 SYMLINK libspdk_accel_iaa.so 00:02:53.920 SO libspdk_fsdev_aio.so.1.0 00:02:53.920 SYMLINK libspdk_blobfs_bdev.so 00:02:53.920 SYMLINK libspdk_sock_posix.so 00:02:53.920 SYMLINK libspdk_fsdev_aio.so 00:02:53.920 LIB libspdk_bdev_error.a 00:02:53.920 LIB libspdk_bdev_delay.a 00:02:53.920 SO libspdk_bdev_error.so.6.0 00:02:54.178 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.178 SO 
libspdk_bdev_delay.so.6.0 00:02:54.178 CC module/bdev/malloc/bdev_malloc.o 00:02:54.178 SYMLINK libspdk_bdev_error.so 00:02:54.178 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:54.178 CC module/bdev/nvme/bdev_nvme.o 00:02:54.178 CC module/bdev/null/bdev_null.o 00:02:54.178 CC module/bdev/passthru/vbdev_passthru.o 00:02:54.178 CC module/bdev/raid/bdev_raid.o 00:02:54.178 LIB libspdk_bdev_gpt.a 00:02:54.178 SYMLINK libspdk_bdev_delay.so 00:02:54.178 CC module/bdev/raid/bdev_raid_rpc.o 00:02:54.178 CC module/bdev/split/vbdev_split.o 00:02:54.178 SO libspdk_bdev_gpt.so.6.0 00:02:54.178 SYMLINK libspdk_bdev_gpt.so 00:02:54.178 CC module/bdev/split/vbdev_split_rpc.o 00:02:54.437 CC module/bdev/raid/bdev_raid_sb.o 00:02:54.437 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:54.437 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:54.437 LIB libspdk_bdev_split.a 00:02:54.437 CC module/bdev/null/bdev_null_rpc.o 00:02:54.437 SO libspdk_bdev_split.so.6.0 00:02:54.437 CC module/bdev/raid/raid0.o 00:02:54.437 SYMLINK libspdk_bdev_split.so 00:02:54.437 LIB libspdk_bdev_malloc.a 00:02:54.696 SO libspdk_bdev_malloc.so.6.0 00:02:54.696 LIB libspdk_bdev_passthru.a 00:02:54.696 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:54.696 LIB libspdk_bdev_null.a 00:02:54.696 SO libspdk_bdev_passthru.so.6.0 00:02:54.696 SYMLINK libspdk_bdev_malloc.so 00:02:54.696 CC module/bdev/raid/raid1.o 00:02:54.696 SO libspdk_bdev_null.so.6.0 00:02:54.696 CC module/bdev/raid/concat.o 00:02:54.696 SYMLINK libspdk_bdev_passthru.so 00:02:54.696 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:54.696 SYMLINK libspdk_bdev_null.so 00:02:54.955 LIB libspdk_bdev_lvol.a 00:02:54.955 CC module/bdev/aio/bdev_aio.o 00:02:54.955 SO libspdk_bdev_lvol.so.6.0 00:02:54.955 CC module/bdev/ftl/bdev_ftl.o 00:02:54.955 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:54.955 SYMLINK libspdk_bdev_lvol.so 00:02:54.955 CC module/bdev/iscsi/bdev_iscsi.o 00:02:54.955 CC module/bdev/raid/raid5f.o 00:02:55.224 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:55.224 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:55.224 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:55.224 LIB libspdk_bdev_ftl.a 00:02:55.225 SO libspdk_bdev_ftl.so.6.0 00:02:55.225 CC module/bdev/aio/bdev_aio_rpc.o 00:02:55.225 LIB libspdk_bdev_zone_block.a 00:02:55.489 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:55.489 SO libspdk_bdev_zone_block.so.6.0 00:02:55.489 SYMLINK libspdk_bdev_ftl.so 00:02:55.489 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.489 CC module/bdev/nvme/nvme_rpc.o 00:02:55.489 LIB libspdk_bdev_iscsi.a 00:02:55.489 CC module/bdev/nvme/bdev_mdns_client.o 00:02:55.489 SYMLINK libspdk_bdev_zone_block.so 00:02:55.489 CC module/bdev/nvme/vbdev_opal.o 00:02:55.489 SO libspdk_bdev_iscsi.so.6.0 00:02:55.489 LIB libspdk_bdev_aio.a 00:02:55.489 SO libspdk_bdev_aio.so.6.0 00:02:55.489 SYMLINK libspdk_bdev_iscsi.so 00:02:55.489 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:55.489 SYMLINK libspdk_bdev_aio.so 00:02:55.489 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:55.748 LIB libspdk_bdev_raid.a 00:02:55.748 SO libspdk_bdev_raid.so.6.0 00:02:55.748 LIB libspdk_bdev_virtio.a 00:02:55.748 SYMLINK libspdk_bdev_raid.so 00:02:55.748 SO libspdk_bdev_virtio.so.6.0 00:02:56.006 SYMLINK libspdk_bdev_virtio.so 00:02:57.385 LIB libspdk_bdev_nvme.a 00:02:57.385 SO libspdk_bdev_nvme.so.7.1 00:02:57.385 SYMLINK libspdk_bdev_nvme.so 00:02:57.645 CC module/event/subsystems/scheduler/scheduler.o 00:02:57.904 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:57.904 CC module/event/subsystems/keyring/keyring.o 00:02:57.904 CC module/event/subsystems/vmd/vmd.o 00:02:57.904 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:57.904 CC module/event/subsystems/sock/sock.o 00:02:57.904 CC module/event/subsystems/fsdev/fsdev.o 00:02:57.904 CC module/event/subsystems/iobuf/iobuf.o 00:02:57.904 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:57.904 LIB libspdk_event_keyring.a 00:02:57.904 LIB libspdk_event_scheduler.a 
00:02:57.904 LIB libspdk_event_vhost_blk.a 00:02:57.904 SO libspdk_event_scheduler.so.4.0 00:02:57.904 SO libspdk_event_keyring.so.1.0 00:02:57.904 LIB libspdk_event_sock.a 00:02:57.904 LIB libspdk_event_fsdev.a 00:02:57.904 LIB libspdk_event_vmd.a 00:02:57.904 SO libspdk_event_vhost_blk.so.3.0 00:02:57.904 SO libspdk_event_sock.so.5.0 00:02:57.904 SO libspdk_event_fsdev.so.1.0 00:02:57.904 SO libspdk_event_vmd.so.6.0 00:02:57.904 LIB libspdk_event_iobuf.a 00:02:57.904 SYMLINK libspdk_event_scheduler.so 00:02:57.904 SYMLINK libspdk_event_keyring.so 00:02:58.163 SYMLINK libspdk_event_sock.so 00:02:58.163 SYMLINK libspdk_event_vhost_blk.so 00:02:58.163 SO libspdk_event_iobuf.so.3.0 00:02:58.163 SYMLINK libspdk_event_fsdev.so 00:02:58.163 SYMLINK libspdk_event_vmd.so 00:02:58.163 SYMLINK libspdk_event_iobuf.so 00:02:58.422 CC module/event/subsystems/accel/accel.o 00:02:58.422 LIB libspdk_event_accel.a 00:02:58.422 SO libspdk_event_accel.so.6.0 00:02:58.682 SYMLINK libspdk_event_accel.so 00:02:58.941 CC module/event/subsystems/bdev/bdev.o 00:02:58.941 LIB libspdk_event_bdev.a 00:02:59.200 SO libspdk_event_bdev.so.6.0 00:02:59.200 SYMLINK libspdk_event_bdev.so 00:02:59.459 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:59.459 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:59.459 CC module/event/subsystems/scsi/scsi.o 00:02:59.459 CC module/event/subsystems/nbd/nbd.o 00:02:59.459 CC module/event/subsystems/ublk/ublk.o 00:02:59.459 LIB libspdk_event_nbd.a 00:02:59.459 LIB libspdk_event_ublk.a 00:02:59.459 LIB libspdk_event_scsi.a 00:02:59.717 SO libspdk_event_nbd.so.6.0 00:02:59.717 SO libspdk_event_ublk.so.3.0 00:02:59.717 SO libspdk_event_scsi.so.6.0 00:02:59.717 SYMLINK libspdk_event_nbd.so 00:02:59.717 SYMLINK libspdk_event_ublk.so 00:02:59.717 LIB libspdk_event_nvmf.a 00:02:59.717 SYMLINK libspdk_event_scsi.so 00:02:59.717 SO libspdk_event_nvmf.so.6.0 00:02:59.717 SYMLINK libspdk_event_nvmf.so 00:02:59.975 CC module/event/subsystems/iscsi/iscsi.o 00:02:59.975 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:59.975 LIB libspdk_event_vhost_scsi.a 00:03:00.234 LIB libspdk_event_iscsi.a 00:03:00.234 SO libspdk_event_vhost_scsi.so.3.0 00:03:00.234 SO libspdk_event_iscsi.so.6.0 00:03:00.234 SYMLINK libspdk_event_vhost_scsi.so 00:03:00.234 SYMLINK libspdk_event_iscsi.so 00:03:00.493 SO libspdk.so.6.0 00:03:00.493 SYMLINK libspdk.so 00:03:00.493 CXX app/trace/trace.o 00:03:00.493 CC app/spdk_lspci/spdk_lspci.o 00:03:00.493 CC app/spdk_nvme_perf/perf.o 00:03:00.493 CC app/trace_record/trace_record.o 00:03:00.759 CC app/nvmf_tgt/nvmf_main.o 00:03:00.759 CC app/iscsi_tgt/iscsi_tgt.o 00:03:00.759 CC app/spdk_tgt/spdk_tgt.o 00:03:00.759 CC examples/ioat/perf/perf.o 00:03:00.759 CC examples/util/zipf/zipf.o 00:03:00.759 CC test/thread/poller_perf/poller_perf.o 00:03:00.759 LINK spdk_lspci 00:03:01.023 LINK poller_perf 00:03:01.023 LINK zipf 00:03:01.023 LINK nvmf_tgt 00:03:01.023 LINK iscsi_tgt 00:03:01.023 LINK spdk_trace_record 00:03:01.023 LINK spdk_tgt 00:03:01.023 LINK ioat_perf 00:03:01.023 LINK spdk_trace 00:03:01.023 CC app/spdk_nvme_identify/identify.o 00:03:01.282 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:01.282 CC examples/ioat/verify/verify.o 00:03:01.282 TEST_HEADER include/spdk/accel.h 00:03:01.282 TEST_HEADER include/spdk/accel_module.h 00:03:01.282 TEST_HEADER include/spdk/assert.h 00:03:01.282 TEST_HEADER include/spdk/barrier.h 00:03:01.282 TEST_HEADER include/spdk/base64.h 00:03:01.282 TEST_HEADER include/spdk/bdev.h 00:03:01.282 TEST_HEADER include/spdk/bdev_module.h 00:03:01.282 TEST_HEADER include/spdk/bdev_zone.h 00:03:01.282 TEST_HEADER include/spdk/bit_array.h 00:03:01.282 TEST_HEADER include/spdk/bit_pool.h 00:03:01.282 TEST_HEADER include/spdk/blob_bdev.h 00:03:01.282 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:01.282 TEST_HEADER include/spdk/blobfs.h 00:03:01.282 TEST_HEADER include/spdk/blob.h 00:03:01.282 CC test/dma/test_dma/test_dma.o 00:03:01.282 CC examples/sock/hello_world/hello_sock.o 
00:03:01.282 TEST_HEADER include/spdk/conf.h 00:03:01.282 TEST_HEADER include/spdk/config.h 00:03:01.282 TEST_HEADER include/spdk/cpuset.h 00:03:01.282 TEST_HEADER include/spdk/crc16.h 00:03:01.282 TEST_HEADER include/spdk/crc32.h 00:03:01.282 TEST_HEADER include/spdk/crc64.h 00:03:01.282 TEST_HEADER include/spdk/dif.h 00:03:01.282 TEST_HEADER include/spdk/dma.h 00:03:01.282 TEST_HEADER include/spdk/endian.h 00:03:01.282 TEST_HEADER include/spdk/env_dpdk.h 00:03:01.282 TEST_HEADER include/spdk/env.h 00:03:01.282 TEST_HEADER include/spdk/event.h 00:03:01.282 TEST_HEADER include/spdk/fd_group.h 00:03:01.282 TEST_HEADER include/spdk/fd.h 00:03:01.282 CC examples/thread/thread/thread_ex.o 00:03:01.282 TEST_HEADER include/spdk/file.h 00:03:01.282 TEST_HEADER include/spdk/fsdev.h 00:03:01.282 TEST_HEADER include/spdk/fsdev_module.h 00:03:01.282 TEST_HEADER include/spdk/ftl.h 00:03:01.282 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:01.282 TEST_HEADER include/spdk/gpt_spec.h 00:03:01.540 TEST_HEADER include/spdk/hexlify.h 00:03:01.540 TEST_HEADER include/spdk/histogram_data.h 00:03:01.540 TEST_HEADER include/spdk/idxd.h 00:03:01.541 CC test/app/bdev_svc/bdev_svc.o 00:03:01.541 TEST_HEADER include/spdk/idxd_spec.h 00:03:01.541 TEST_HEADER include/spdk/init.h 00:03:01.541 TEST_HEADER include/spdk/ioat.h 00:03:01.541 TEST_HEADER include/spdk/ioat_spec.h 00:03:01.541 TEST_HEADER include/spdk/iscsi_spec.h 00:03:01.541 TEST_HEADER include/spdk/json.h 00:03:01.541 TEST_HEADER include/spdk/jsonrpc.h 00:03:01.541 TEST_HEADER include/spdk/keyring.h 00:03:01.541 TEST_HEADER include/spdk/keyring_module.h 00:03:01.541 LINK interrupt_tgt 00:03:01.541 TEST_HEADER include/spdk/likely.h 00:03:01.541 TEST_HEADER include/spdk/log.h 00:03:01.541 TEST_HEADER include/spdk/lvol.h 00:03:01.541 TEST_HEADER include/spdk/md5.h 00:03:01.541 TEST_HEADER include/spdk/memory.h 00:03:01.541 TEST_HEADER include/spdk/mmio.h 00:03:01.541 TEST_HEADER include/spdk/nbd.h 00:03:01.541 TEST_HEADER 
include/spdk/net.h 00:03:01.541 TEST_HEADER include/spdk/notify.h 00:03:01.541 TEST_HEADER include/spdk/nvme.h 00:03:01.541 TEST_HEADER include/spdk/nvme_intel.h 00:03:01.541 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:01.541 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:01.541 TEST_HEADER include/spdk/nvme_spec.h 00:03:01.541 TEST_HEADER include/spdk/nvme_zns.h 00:03:01.541 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:01.541 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:01.541 TEST_HEADER include/spdk/nvmf.h 00:03:01.541 TEST_HEADER include/spdk/nvmf_spec.h 00:03:01.541 TEST_HEADER include/spdk/nvmf_transport.h 00:03:01.541 TEST_HEADER include/spdk/opal.h 00:03:01.541 TEST_HEADER include/spdk/opal_spec.h 00:03:01.541 TEST_HEADER include/spdk/pci_ids.h 00:03:01.541 TEST_HEADER include/spdk/pipe.h 00:03:01.541 TEST_HEADER include/spdk/queue.h 00:03:01.541 TEST_HEADER include/spdk/reduce.h 00:03:01.541 TEST_HEADER include/spdk/rpc.h 00:03:01.541 TEST_HEADER include/spdk/scheduler.h 00:03:01.541 TEST_HEADER include/spdk/scsi.h 00:03:01.541 TEST_HEADER include/spdk/scsi_spec.h 00:03:01.541 TEST_HEADER include/spdk/sock.h 00:03:01.541 TEST_HEADER include/spdk/stdinc.h 00:03:01.541 TEST_HEADER include/spdk/string.h 00:03:01.541 TEST_HEADER include/spdk/thread.h 00:03:01.541 TEST_HEADER include/spdk/trace.h 00:03:01.541 TEST_HEADER include/spdk/trace_parser.h 00:03:01.541 TEST_HEADER include/spdk/tree.h 00:03:01.541 TEST_HEADER include/spdk/ublk.h 00:03:01.541 TEST_HEADER include/spdk/util.h 00:03:01.541 TEST_HEADER include/spdk/uuid.h 00:03:01.541 TEST_HEADER include/spdk/version.h 00:03:01.541 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:01.541 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:01.541 TEST_HEADER include/spdk/vhost.h 00:03:01.541 TEST_HEADER include/spdk/vmd.h 00:03:01.541 TEST_HEADER include/spdk/xor.h 00:03:01.541 TEST_HEADER include/spdk/zipf.h 00:03:01.541 CXX test/cpp_headers/accel.o 00:03:01.541 CC test/env/mem_callbacks/mem_callbacks.o 
00:03:01.541 LINK verify 00:03:01.541 LINK bdev_svc 00:03:01.541 LINK hello_sock 00:03:01.800 LINK thread 00:03:01.800 CXX test/cpp_headers/accel_module.o 00:03:01.800 LINK spdk_nvme_perf 00:03:01.800 CC examples/vmd/lsvmd/lsvmd.o 00:03:01.800 CC test/env/vtophys/vtophys.o 00:03:01.800 CXX test/cpp_headers/assert.o 00:03:01.800 CC app/spdk_nvme_discover/discovery_aer.o 00:03:01.800 CXX test/cpp_headers/barrier.o 00:03:02.059 LINK lsvmd 00:03:02.059 LINK test_dma 00:03:02.059 LINK vtophys 00:03:02.059 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:02.059 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.059 CXX test/cpp_headers/base64.o 00:03:02.059 CXX test/cpp_headers/bdev.o 00:03:02.059 LINK spdk_nvme_discover 00:03:02.059 CC examples/vmd/led/led.o 00:03:02.059 LINK env_dpdk_post_init 00:03:02.059 LINK spdk_nvme_identify 00:03:02.318 LINK mem_callbacks 00:03:02.318 CC test/event/event_perf/event_perf.o 00:03:02.318 CXX test/cpp_headers/bdev_module.o 00:03:02.318 CXX test/cpp_headers/bdev_zone.o 00:03:02.318 LINK led 00:03:02.318 CC examples/idxd/perf/perf.o 00:03:02.318 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:02.318 LINK event_perf 00:03:02.318 CC test/env/memory/memory_ut.o 00:03:02.577 CC app/spdk_top/spdk_top.o 00:03:02.577 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:02.577 LINK nvme_fuzz 00:03:02.577 CXX test/cpp_headers/bit_array.o 00:03:02.577 CC test/event/reactor/reactor.o 00:03:02.577 CC examples/accel/perf/accel_perf.o 00:03:02.577 CXX test/cpp_headers/bit_pool.o 00:03:02.835 CC examples/blob/hello_world/hello_blob.o 00:03:02.835 LINK idxd_perf 00:03:02.835 LINK hello_fsdev 00:03:02.835 LINK reactor 00:03:02.835 CXX test/cpp_headers/blob_bdev.o 00:03:02.835 CC examples/nvme/hello_world/hello_world.o 00:03:03.093 CC test/app/histogram_perf/histogram_perf.o 00:03:03.093 CXX test/cpp_headers/blobfs_bdev.o 00:03:03.093 LINK hello_blob 00:03:03.093 CC test/event/reactor_perf/reactor_perf.o 00:03:03.093 LINK histogram_perf 00:03:03.093 
CC test/env/pci/pci_ut.o 00:03:03.352 LINK hello_world 00:03:03.352 LINK reactor_perf 00:03:03.352 CXX test/cpp_headers/blobfs.o 00:03:03.352 LINK accel_perf 00:03:03.352 CC examples/blob/cli/blobcli.o 00:03:03.352 CC test/app/jsoncat/jsoncat.o 00:03:03.352 CXX test/cpp_headers/blob.o 00:03:03.352 CC test/event/app_repeat/app_repeat.o 00:03:03.612 CC examples/nvme/reconnect/reconnect.o 00:03:03.612 CC test/app/stub/stub.o 00:03:03.612 LINK spdk_top 00:03:03.612 LINK jsoncat 00:03:03.612 CXX test/cpp_headers/conf.o 00:03:03.612 LINK app_repeat 00:03:03.612 LINK pci_ut 00:03:03.870 LINK memory_ut 00:03:03.870 LINK stub 00:03:03.870 CXX test/cpp_headers/config.o 00:03:03.870 CXX test/cpp_headers/cpuset.o 00:03:03.870 CC app/vhost/vhost.o 00:03:03.870 LINK reconnect 00:03:03.870 LINK blobcli 00:03:03.870 CC test/event/scheduler/scheduler.o 00:03:03.870 CC examples/bdev/hello_world/hello_bdev.o 00:03:04.129 CXX test/cpp_headers/crc16.o 00:03:04.129 CC test/rpc_client/rpc_client_test.o 00:03:04.129 LINK vhost 00:03:04.129 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.129 CC test/accel/dif/dif.o 00:03:04.129 CC test/blobfs/mkfs/mkfs.o 00:03:04.129 CXX test/cpp_headers/crc32.o 00:03:04.129 LINK scheduler 00:03:04.129 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.129 LINK hello_bdev 00:03:04.387 LINK rpc_client_test 00:03:04.387 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.387 CXX test/cpp_headers/crc64.o 00:03:04.387 LINK mkfs 00:03:04.387 CC app/spdk_dd/spdk_dd.o 00:03:04.645 LINK iscsi_fuzz 00:03:04.645 CC examples/bdev/bdevperf/bdevperf.o 00:03:04.645 CXX test/cpp_headers/dif.o 00:03:04.645 CC test/nvme/aer/aer.o 00:03:04.645 CC test/lvol/esnap/esnap.o 00:03:04.645 CC test/nvme/reset/reset.o 00:03:04.904 LINK nvme_manage 00:03:04.904 CXX test/cpp_headers/dma.o 00:03:04.904 LINK vhost_fuzz 00:03:04.904 LINK spdk_dd 00:03:04.904 LINK aer 00:03:04.904 CC app/fio/nvme/fio_plugin.o 00:03:04.904 CXX test/cpp_headers/endian.o 00:03:05.162 CC 
examples/nvme/arbitration/arbitration.o 00:03:05.162 LINK dif 00:03:05.162 LINK reset 00:03:05.162 CC app/fio/bdev/fio_plugin.o 00:03:05.162 CC examples/nvme/hotplug/hotplug.o 00:03:05.162 CXX test/cpp_headers/env_dpdk.o 00:03:05.162 CC test/nvme/sgl/sgl.o 00:03:05.162 CXX test/cpp_headers/env.o 00:03:05.420 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:05.420 CXX test/cpp_headers/event.o 00:03:05.420 LINK arbitration 00:03:05.420 LINK hotplug 00:03:05.420 CC examples/nvme/abort/abort.o 00:03:05.420 LINK sgl 00:03:05.420 LINK cmb_copy 00:03:05.678 LINK bdevperf 00:03:05.678 CXX test/cpp_headers/fd_group.o 00:03:05.678 CXX test/cpp_headers/fd.o 00:03:05.678 CXX test/cpp_headers/file.o 00:03:05.678 LINK spdk_nvme 00:03:05.678 LINK spdk_bdev 00:03:05.678 CC test/nvme/e2edp/nvme_dp.o 00:03:05.678 CXX test/cpp_headers/fsdev.o 00:03:05.678 CC test/nvme/overhead/overhead.o 00:03:05.678 CXX test/cpp_headers/fsdev_module.o 00:03:05.678 CXX test/cpp_headers/ftl.o 00:03:05.935 CXX test/cpp_headers/fuse_dispatcher.o 00:03:05.935 CXX test/cpp_headers/gpt_spec.o 00:03:05.935 CC test/nvme/err_injection/err_injection.o 00:03:05.935 CXX test/cpp_headers/hexlify.o 00:03:05.935 LINK abort 00:03:05.935 CXX test/cpp_headers/histogram_data.o 00:03:05.935 CXX test/cpp_headers/idxd.o 00:03:06.193 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.193 LINK nvme_dp 00:03:06.193 LINK err_injection 00:03:06.193 LINK overhead 00:03:06.193 CXX test/cpp_headers/idxd_spec.o 00:03:06.193 CXX test/cpp_headers/init.o 00:03:06.193 CC test/bdev/bdevio/bdevio.o 00:03:06.193 CXX test/cpp_headers/ioat.o 00:03:06.193 CXX test/cpp_headers/ioat_spec.o 00:03:06.193 CXX test/cpp_headers/iscsi_spec.o 00:03:06.193 LINK pmr_persistence 00:03:06.193 CC test/nvme/startup/startup.o 00:03:06.193 CXX test/cpp_headers/json.o 00:03:06.451 CC test/nvme/reserve/reserve.o 00:03:06.451 CXX test/cpp_headers/jsonrpc.o 00:03:06.451 CXX test/cpp_headers/keyring.o 00:03:06.451 CXX test/cpp_headers/keyring_module.o 
00:03:06.451 LINK startup 00:03:06.451 CC test/nvme/simple_copy/simple_copy.o 00:03:06.451 CXX test/cpp_headers/likely.o 00:03:06.451 CC test/nvme/connect_stress/connect_stress.o 00:03:06.710 LINK reserve 00:03:06.710 CC examples/nvmf/nvmf/nvmf.o 00:03:06.710 LINK bdevio 00:03:06.710 CXX test/cpp_headers/log.o 00:03:06.710 CC test/nvme/compliance/nvme_compliance.o 00:03:06.710 CC test/nvme/boot_partition/boot_partition.o 00:03:06.710 LINK connect_stress 00:03:06.710 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.710 LINK simple_copy 00:03:06.969 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.969 CXX test/cpp_headers/lvol.o 00:03:06.969 LINK boot_partition 00:03:06.969 CC test/nvme/fdp/fdp.o 00:03:06.969 CXX test/cpp_headers/md5.o 00:03:06.969 CC test/nvme/cuse/cuse.o 00:03:06.969 LINK fused_ordering 00:03:06.969 LINK nvmf 00:03:06.969 LINK doorbell_aers 00:03:06.969 CXX test/cpp_headers/memory.o 00:03:06.969 CXX test/cpp_headers/mmio.o 00:03:07.228 LINK nvme_compliance 00:03:07.228 CXX test/cpp_headers/nbd.o 00:03:07.228 CXX test/cpp_headers/net.o 00:03:07.228 CXX test/cpp_headers/notify.o 00:03:07.228 CXX test/cpp_headers/nvme.o 00:03:07.228 CXX test/cpp_headers/nvme_intel.o 00:03:07.228 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.228 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.228 CXX test/cpp_headers/nvme_spec.o 00:03:07.487 LINK fdp 00:03:07.487 CXX test/cpp_headers/nvme_zns.o 00:03:07.487 CXX test/cpp_headers/nvmf_cmd.o 00:03:07.487 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:07.487 CXX test/cpp_headers/nvmf.o 00:03:07.487 CXX test/cpp_headers/nvmf_spec.o 00:03:07.487 CXX test/cpp_headers/nvmf_transport.o 00:03:07.487 CXX test/cpp_headers/opal.o 00:03:07.487 CXX test/cpp_headers/opal_spec.o 00:03:07.487 CXX test/cpp_headers/pci_ids.o 00:03:07.487 CXX test/cpp_headers/pipe.o 00:03:07.747 CXX test/cpp_headers/queue.o 00:03:07.747 CXX test/cpp_headers/reduce.o 00:03:07.747 CXX test/cpp_headers/rpc.o 00:03:07.747 CXX test/cpp_headers/scheduler.o 
00:03:07.747 CXX test/cpp_headers/scsi.o 00:03:07.747 CXX test/cpp_headers/scsi_spec.o 00:03:07.747 CXX test/cpp_headers/sock.o 00:03:07.747 CXX test/cpp_headers/stdinc.o 00:03:07.747 CXX test/cpp_headers/string.o 00:03:07.747 CXX test/cpp_headers/thread.o 00:03:07.747 CXX test/cpp_headers/trace.o 00:03:08.006 CXX test/cpp_headers/trace_parser.o 00:03:08.006 CXX test/cpp_headers/tree.o 00:03:08.006 CXX test/cpp_headers/ublk.o 00:03:08.006 CXX test/cpp_headers/util.o 00:03:08.006 CXX test/cpp_headers/uuid.o 00:03:08.006 CXX test/cpp_headers/version.o 00:03:08.006 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.006 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.006 CXX test/cpp_headers/vhost.o 00:03:08.006 CXX test/cpp_headers/vmd.o 00:03:08.006 CXX test/cpp_headers/xor.o 00:03:08.006 CXX test/cpp_headers/zipf.o 00:03:08.574 LINK cuse 00:03:11.108 LINK esnap 00:03:11.367 00:03:11.367 real 1m30.335s 00:03:11.367 user 8m35.029s 00:03:11.367 sys 1m41.714s 00:03:11.367 13:15:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:11.367 ************************************ 00:03:11.367 END TEST make 00:03:11.367 ************************************ 00:03:11.367 13:15:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.367 13:15:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.367 13:15:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.367 13:15:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.367 13:15:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.367 13:15:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.367 13:15:59 -- pm/common@44 -- $ pid=5251 00:03:11.367 13:15:59 -- pm/common@50 -- $ kill -TERM 5251 00:03:11.367 13:15:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.367 13:15:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.367 
13:15:59 -- pm/common@44 -- $ pid=5253 00:03:11.367 13:15:59 -- pm/common@50 -- $ kill -TERM 5253 00:03:11.367 13:15:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:11.367 13:15:59 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:11.367 13:15:59 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.367 13:15:59 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.367 13:15:59 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.628 13:15:59 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.628 13:15:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.628 13:15:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.628 13:15:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.628 13:15:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.628 13:15:59 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.628 13:15:59 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.628 13:15:59 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.628 13:15:59 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.628 13:15:59 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.628 13:15:59 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.628 13:15:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.628 13:15:59 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.628 13:15:59 -- scripts/common.sh@345 -- # : 1 00:03:11.628 13:15:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.628 13:15:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.628 13:15:59 -- scripts/common.sh@365 -- # decimal 1 00:03:11.628 13:15:59 -- scripts/common.sh@353 -- # local d=1 00:03:11.628 13:15:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.628 13:15:59 -- scripts/common.sh@355 -- # echo 1 00:03:11.628 13:15:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.628 13:15:59 -- scripts/common.sh@366 -- # decimal 2 00:03:11.628 13:15:59 -- scripts/common.sh@353 -- # local d=2 00:03:11.628 13:15:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.628 13:15:59 -- scripts/common.sh@355 -- # echo 2 00:03:11.628 13:15:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.628 13:15:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.628 13:15:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.628 13:15:59 -- scripts/common.sh@368 -- # return 0 00:03:11.628 13:15:59 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.628 13:15:59 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.628 --rc genhtml_branch_coverage=1 00:03:11.628 --rc genhtml_function_coverage=1 00:03:11.628 --rc genhtml_legend=1 00:03:11.628 --rc geninfo_all_blocks=1 00:03:11.628 --rc geninfo_unexecuted_blocks=1 00:03:11.628 00:03:11.628 ' 00:03:11.628 13:15:59 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.628 --rc genhtml_branch_coverage=1 00:03:11.628 --rc genhtml_function_coverage=1 00:03:11.628 --rc genhtml_legend=1 00:03:11.628 --rc geninfo_all_blocks=1 00:03:11.628 --rc geninfo_unexecuted_blocks=1 00:03:11.628 00:03:11.628 ' 00:03:11.628 13:15:59 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.628 --rc genhtml_branch_coverage=1 00:03:11.628 --rc 
genhtml_function_coverage=1 00:03:11.628 --rc genhtml_legend=1 00:03:11.628 --rc geninfo_all_blocks=1 00:03:11.628 --rc geninfo_unexecuted_blocks=1 00:03:11.628 00:03:11.628 ' 00:03:11.628 13:15:59 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.628 --rc genhtml_branch_coverage=1 00:03:11.628 --rc genhtml_function_coverage=1 00:03:11.628 --rc genhtml_legend=1 00:03:11.628 --rc geninfo_all_blocks=1 00:03:11.628 --rc geninfo_unexecuted_blocks=1 00:03:11.628 00:03:11.628 ' 00:03:11.628 13:15:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:11.628 13:15:59 -- nvmf/common.sh@7 -- # uname -s 00:03:11.628 13:15:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.628 13:15:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.628 13:15:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.628 13:15:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.628 13:15:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.628 13:15:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.628 13:15:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.628 13:15:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.628 13:15:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.628 13:15:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.628 13:16:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:133dc7ed-3b82-427d-81c6-87c2a8a96ca8 00:03:11.628 13:16:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=133dc7ed-3b82-427d-81c6-87c2a8a96ca8 00:03:11.628 13:16:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.628 13:16:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.628 13:16:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:11.628 13:16:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:11.628 13:16:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:11.628 13:16:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.628 13:16:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.628 13:16:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.628 13:16:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.628 13:16:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.628 13:16:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.628 13:16:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.628 13:16:00 -- paths/export.sh@5 -- # export PATH 00:03:11.628 13:16:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.628 13:16:00 -- nvmf/common.sh@51 -- # : 0 00:03:11.628 13:16:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.628 13:16:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.628 13:16:00 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:11.628 13:16:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.628 13:16:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.628 13:16:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.628 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.628 13:16:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.628 13:16:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.628 13:16:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.628 13:16:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.628 13:16:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.628 13:16:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.628 13:16:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.628 13:16:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.628 13:16:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.628 13:16:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.628 13:16:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.628 13:16:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.628 13:16:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.628 13:16:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.628 13:16:00 -- spdk/autotest.sh@48 -- # udevadm_pid=54206 00:03:11.628 13:16:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.628 13:16:00 -- pm/common@17 -- # local monitor 00:03:11.628 13:16:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.628 13:16:00 -- pm/common@21 -- # date +%s 00:03:11.628 13:16:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.628 13:16:00 -- pm/common@25 -- # sleep 1 00:03:11.628 13:16:00 -- 
pm/common@21 -- # date +%s 00:03:11.628 13:16:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732626960 00:03:11.628 13:16:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732626960 00:03:11.628 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732626960_collect-vmstat.pm.log 00:03:11.628 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732626960_collect-cpu-load.pm.log 00:03:12.566 13:16:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.566 13:16:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.566 13:16:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.566 13:16:01 -- common/autotest_common.sh@10 -- # set +x 00:03:12.566 13:16:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.566 13:16:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.566 13:16:01 -- common/autotest_common.sh@10 -- # set +x 00:03:12.566 13:16:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:12.825 13:16:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:12.825 13:16:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:12.825 13:16:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:12.825 13:16:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:12.825 13:16:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.825 13:16:01 -- common/autotest_common.sh@1457 -- # uname 00:03:12.825 13:16:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:12.825 13:16:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.825 13:16:01 -- common/autotest_common.sh@1477 -- 
# uname 00:03:12.825 13:16:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:12.825 13:16:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.825 13:16:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.825 lcov: LCOV version 1.15 00:03:12.825 13:16:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.758 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.638 13:16:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:42.638 13:16:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.638 13:16:28 -- common/autotest_common.sh@10 -- # set +x 00:03:42.638 13:16:28 -- spdk/autotest.sh@78 -- # rm -f 00:03:42.638 13:16:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.638 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:42.638 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:42.638 13:16:29 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:42.638 13:16:29 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:42.638 13:16:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:42.638 13:16:29 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:42.638 
13:16:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.638 13:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:42.638 13:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:42.638 13:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.638 13:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.638 13:16:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.638 13:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:42.638 13:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:42.638 13:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:42.638 13:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.638 13:16:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.638 13:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:42.638 13:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:42.638 13:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:42.638 13:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.638 13:16:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.638 13:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:42.638 13:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:42.638 13:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:42.638 13:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.638 13:16:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:42.638 13:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.638 13:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.638 13:16:29 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:42.638 13:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:42.638 13:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.638 No valid GPT data, bailing 00:03:42.638 13:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.638 13:16:29 -- scripts/common.sh@394 -- # pt= 00:03:42.638 13:16:29 -- scripts/common.sh@395 -- # return 1 00:03:42.638 13:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.638 1+0 records in 00:03:42.638 1+0 records out 00:03:42.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462126 s, 227 MB/s 00:03:42.638 13:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.638 13:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.638 13:16:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:42.638 13:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:42.638 13:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:42.638 No valid GPT data, bailing 00:03:42.638 13:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:42.638 13:16:29 -- scripts/common.sh@394 -- # pt= 00:03:42.638 13:16:29 -- scripts/common.sh@395 -- # return 1 00:03:42.638 13:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:42.638 1+0 records in 00:03:42.638 1+0 records out 00:03:42.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381856 s, 275 MB/s 00:03:42.638 13:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.638 13:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.638 13:16:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:42.638 13:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:42.638 13:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:42.638 No valid GPT data, bailing 00:03:42.638 13:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:42.638 13:16:29 -- scripts/common.sh@394 -- # pt= 00:03:42.638 13:16:29 -- scripts/common.sh@395 -- # return 1 00:03:42.639 13:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:42.639 1+0 records in 00:03:42.639 1+0 records out 00:03:42.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404149 s, 259 MB/s 00:03:42.639 13:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.639 13:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.639 13:16:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:42.639 13:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:42.639 13:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:42.639 No valid GPT data, bailing 00:03:42.639 13:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:42.639 13:16:29 -- scripts/common.sh@394 -- # pt= 00:03:42.639 13:16:29 -- scripts/common.sh@395 -- # return 1 00:03:42.639 13:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:42.639 1+0 records in 00:03:42.639 1+0 records out 00:03:42.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369884 s, 283 MB/s 00:03:42.639 13:16:29 -- spdk/autotest.sh@105 -- # sync 00:03:42.639 13:16:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.639 13:16:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.639 13:16:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.576 13:16:31 -- spdk/autotest.sh@111 -- # uname -s 00:03:43.576 13:16:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:43.576 13:16:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:43.576 13:16:31 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
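The repeated `No valid GPT data, bailing` / `dd if=/dev/zero` pairs above come from the cleanup loop in autotest.sh: for each namespace not claimed by a mount, it checks for a partition-table type with `blkid -s PTTYPE -o value` and, finding none, zeroes the first MiB. A sketch under stated assumptions (blkid available; a scratch file standing in for `/dev/nvme*n*` so nothing real is wiped):

```shell
#!/usr/bin/env bash
# Sketch of the namespace wipe loop traced in the log. The scratch
# file below is a stand-in for a real /dev/nvme*n* block device.
wipe_if_no_pt() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null) || pt=
    if [[ -z $pt ]]; then
        # Matches the log's: dd if=/dev/zero of=$dev bs=1M count=1
        dd if=/dev/zero of="$block" bs=1M count=1 conv=notrunc status=none
        echo "wiped $block"
    else
        echo "$block has a $pt partition table, skipping"
    fi
}

scratch=$(mktemp)
head -c 1048576 /dev/zero | tr '\0' 'A' > "$scratch"   # non-zero filler
wipe_if_no_pt "$scratch"
cmp -s "$scratch" <(head -c 1048576 /dev/zero) && echo "first MiB is zero"
rm -f "$scratch"
```

`conv=notrunc` is added here so the scratch file keeps its size; on a raw block device the distinction does not arise.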
00:03:44.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.145 Hugepages 00:03:44.145 node hugesize free / total 00:03:44.145 node0 1048576kB 0 / 0 00:03:44.145 node0 2048kB 0 / 0 00:03:44.145 00:03:44.145 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:44.145 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:44.145 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:44.145 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:44.145 13:16:32 -- spdk/autotest.sh@117 -- # uname -s 00:03:44.145 13:16:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:44.145 13:16:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:44.145 13:16:32 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.083 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:45.083 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:45.083 13:16:33 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:46.040 13:16:34 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:46.040 13:16:34 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:46.040 13:16:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:46.040 13:16:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:46.040 13:16:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:46.040 13:16:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:46.040 13:16:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.040 13:16:34 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:46.040 13:16:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:46.040 13:16:34 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:46.040 13:16:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:46.040 13:16:34 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:46.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.622 Waiting for block devices as requested 00:03:46.622 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.622 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.622 13:16:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:46.622 13:16:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:46.622 13:16:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:46.622 13:16:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:46.622 13:16:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:46.622 13:16:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:46.622 13:16:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:46.622 13:16:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:46.622 13:16:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:46.622 13:16:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:46.622 13:16:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:46.622 13:16:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:46.622 13:16:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:46.622 13:16:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:46.622 13:16:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:46.622 13:16:35 -- common/autotest_common.sh@1534 -- 
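The `get_nvme_bdfs` trace above builds its BDF list by piping `gen_nvme.sh` output through `jq -r '.config[].params.traddr'` and then guarding against an empty result. A sketch of that extraction, with a hand-written JSON stand-in for `gen_nvme.sh` output shaped like the two controllers the log discovered:

```shell
#!/usr/bin/env bash
# Sketch of get_nvme_bdfs from the log. The JSON literal is an
# assumption standing in for real gen_nvme.sh output.
json='{"config":[{"params":{"traddr":"0000:00:10.0"}},
                 {"params":{"traddr":"0000:00:11.0"}}]}'
bdfs=($(jq -r '.config[].params.traddr' <<<"$json"))
# Mirrors the log's "(( 2 == 0 ))" guard: bail when nothing was found
(( ${#bdfs[@]} > 0 )) || { echo "No NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"
```

With two controllers present, the guard evaluates as `(( 2 == 0 ))` and falls through to the `printf` seen in the trace.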
# [[ 8 -ne 0 ]] 00:03:46.622 13:16:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:46.622 13:16:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:46.622 13:16:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:46.880 13:16:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:46.880 13:16:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:46.880 13:16:35 -- common/autotest_common.sh@1543 -- # continue 00:03:46.880 13:16:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:46.880 13:16:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:46.880 13:16:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:46.880 13:16:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:46.880 13:16:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:46.880 13:16:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:46.881 13:16:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:46.881 13:16:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:46.881 13:16:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:46.881 13:16:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:46.881 13:16:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:46.881 13:16:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:46.881 13:16:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:46.881 13:16:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:46.881 13:16:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:46.881 13:16:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:46.881 13:16:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:03:46.881 13:16:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:46.881 13:16:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:46.881 13:16:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:46.881 13:16:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:46.881 13:16:35 -- common/autotest_common.sh@1543 -- # continue 00:03:46.881 13:16:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:46.881 13:16:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:46.881 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:03:46.881 13:16:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:46.881 13:16:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.881 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:03:46.881 13:16:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.449 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.708 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.708 13:16:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:47.708 13:16:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.708 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:03:47.708 13:16:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:47.708 13:16:36 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:47.708 13:16:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:47.708 13:16:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:47.708 13:16:36 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:47.708 13:16:36 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:47.708 13:16:36 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:47.708 13:16:36 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:47.708 
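The per-controller checks above grep `nvme id-ctrl` output for `oacs` and `unvmcap`: bit 3 of OACS (0x8) advertises Namespace Management, and an unallocated capacity of 0 means nothing needs reverting, hence the `continue`. A sketch using canned text in place of real `nvme id-ctrl` output (the `0x12a` value matches the log):

```shell
#!/usr/bin/env bash
# Sketch of the OACS/unvmcap parsing traced in the log. id_ctrl is
# a stand-in for `nvme id-ctrl /dev/nvme1` output.
id_ctrl='oacs      : 0x12a
unvmcap   : 0'
oacs=$(grep oacs <<<"$id_ctrl" | cut -d: -f2)   # -> ' 0x12a'
oacs_ns_manage=$(( oacs & 0x8 ))                # bit 3 = NS management
unvmcap=$(grep unvmcap <<<"$id_ctrl" | cut -d: -f2)
if (( oacs_ns_manage != 0 )) && (( unvmcap == 0 )); then
    echo "namespace management supported, nothing to revert"
fi
```

Bash arithmetic accepts the `0x`-prefixed value with leading whitespace directly, which is why the trace can assign `oacs=' 0x12a'` and still compute `oacs_ns_manage=8`.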
13:16:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:47.708 13:16:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:47.708 13:16:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.708 13:16:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:47.708 13:16:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:47.708 13:16:36 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:47.708 13:16:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:47.708 13:16:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:47.708 13:16:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:47.708 13:16:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:47.708 13:16:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:47.708 13:16:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:47.708 13:16:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:47.708 13:16:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:47.708 13:16:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:47.708 13:16:36 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:47.708 13:16:36 -- common/autotest_common.sh@1572 -- # return 0 00:03:47.708 13:16:36 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:47.708 13:16:36 -- common/autotest_common.sh@1580 -- # return 0 00:03:47.708 13:16:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:47.708 13:16:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:47.708 13:16:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:47.708 13:16:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:47.708 13:16:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:47.708 13:16:36 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.708 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:03:47.708 13:16:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:47.708 13:16:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:47.708 13:16:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.708 13:16:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.708 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:03:47.708 ************************************ 00:03:47.708 START TEST env 00:03:47.708 ************************************ 00:03:47.708 13:16:36 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:47.967 * Looking for test storage... 00:03:47.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:47.967 13:16:36 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:47.967 13:16:36 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:47.967 13:16:36 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:47.967 13:16:36 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:47.967 13:16:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.967 13:16:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.967 13:16:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.967 13:16:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.967 13:16:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.967 13:16:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.967 13:16:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.967 13:16:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.967 13:16:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.967 13:16:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.967 13:16:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.967 13:16:36 env -- 
scripts/common.sh@344 -- # case "$op" in 00:03:47.967 13:16:36 env -- scripts/common.sh@345 -- # : 1 00:03:47.967 13:16:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.967 13:16:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:47.967 13:16:36 env -- scripts/common.sh@365 -- # decimal 1 00:03:47.967 13:16:36 env -- scripts/common.sh@353 -- # local d=1 00:03:47.967 13:16:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.967 13:16:36 env -- scripts/common.sh@355 -- # echo 1 00:03:47.967 13:16:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.967 13:16:36 env -- scripts/common.sh@366 -- # decimal 2 00:03:47.967 13:16:36 env -- scripts/common.sh@353 -- # local d=2 00:03:47.967 13:16:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.967 13:16:36 env -- scripts/common.sh@355 -- # echo 2 00:03:47.967 13:16:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.967 13:16:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.967 13:16:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.967 13:16:36 env -- scripts/common.sh@368 -- # return 0 00:03:47.967 13:16:36 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.968 13:16:36 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.968 --rc genhtml_branch_coverage=1 00:03:47.968 --rc genhtml_function_coverage=1 00:03:47.968 --rc genhtml_legend=1 00:03:47.968 --rc geninfo_all_blocks=1 00:03:47.968 --rc geninfo_unexecuted_blocks=1 00:03:47.968 00:03:47.968 ' 00:03:47.968 13:16:36 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.968 --rc genhtml_branch_coverage=1 00:03:47.968 --rc genhtml_function_coverage=1 00:03:47.968 --rc genhtml_legend=1 00:03:47.968 --rc 
geninfo_all_blocks=1 00:03:47.968 --rc geninfo_unexecuted_blocks=1 00:03:47.968 00:03:47.968 ' 00:03:47.968 13:16:36 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.968 --rc genhtml_branch_coverage=1 00:03:47.968 --rc genhtml_function_coverage=1 00:03:47.968 --rc genhtml_legend=1 00:03:47.968 --rc geninfo_all_blocks=1 00:03:47.968 --rc geninfo_unexecuted_blocks=1 00:03:47.968 00:03:47.968 ' 00:03:47.968 13:16:36 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.968 --rc genhtml_branch_coverage=1 00:03:47.968 --rc genhtml_function_coverage=1 00:03:47.968 --rc genhtml_legend=1 00:03:47.968 --rc geninfo_all_blocks=1 00:03:47.968 --rc geninfo_unexecuted_blocks=1 00:03:47.968 00:03:47.968 ' 00:03:47.968 13:16:36 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:47.968 13:16:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.968 13:16:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.968 13:16:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:47.968 ************************************ 00:03:47.968 START TEST env_memory 00:03:47.968 ************************************ 00:03:47.968 13:16:36 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:47.968 00:03:47.968 00:03:47.968 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.968 http://cunit.sourceforge.net/ 00:03:47.968 00:03:47.968 00:03:47.968 Suite: memory 00:03:47.968 Test: alloc and free memory map ...[2024-11-26 13:16:36.502463] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:48.227 passed 00:03:48.227 Test: mem map translation ...[2024-11-26 13:16:36.562682] 
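The `cmp_versions 1.15 '<' 2` trace above is how the env test picks its lcov flag set: the real helper in scripts/common.sh walks dot-separated version components. The sketch below swaps in GNU `sort -V` for brevity (a named substitution, not the original algorithm); for plain numeric versions it yields the same ordering.

```shell
#!/usr/bin/env bash
# Sketch of the version gate traced in the log. Uses sort -V
# instead of cmp_versions' componentwise loop.
lt() {
    [[ $1 != "$2" ]] && \
    [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
}

lt 1.15 2 && echo "lcov 1.15 < 2: use the legacy flag set"
lt 2.1 2  || echo "lcov 2.1 >= 2: use the newer flag set"
```

In this run lcov 1.15 is installed, so the comparison returns true and `LCOV_OPTS` gets the branch/function-coverage flags shown in the trace.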
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:48.227 [2024-11-26 13:16:36.562754] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:48.227 [2024-11-26 13:16:36.562853] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:48.227 [2024-11-26 13:16:36.562891] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:48.227 passed 00:03:48.227 Test: mem map registration ...[2024-11-26 13:16:36.663944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:48.227 [2024-11-26 13:16:36.664016] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:48.227 passed 00:03:48.227 Test: mem map adjacent registrations ...passed 00:03:48.227 00:03:48.227 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.227 suites 1 1 n/a 0 0 00:03:48.227 tests 4 4 4 0 0 00:03:48.227 asserts 152 152 152 0 n/a 00:03:48.227 00:03:48.227 Elapsed time = 0.329 seconds 00:03:48.487 00:03:48.487 real 0m0.367s 00:03:48.487 user 0m0.338s 00:03:48.487 sys 0m0.023s 00:03:48.487 13:16:36 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.487 ************************************ 00:03:48.487 END TEST env_memory 00:03:48.487 ************************************ 00:03:48.487 13:16:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:48.487 13:16:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:48.487 
13:16:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.487 13:16:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.487 13:16:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.487 ************************************ 00:03:48.487 START TEST env_vtophys 00:03:48.487 ************************************ 00:03:48.487 13:16:36 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:48.487 EAL: lib.eal log level changed from notice to debug 00:03:48.487 EAL: Detected lcore 0 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 1 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 2 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 3 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 4 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 5 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 6 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 7 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 8 as core 0 on socket 0 00:03:48.487 EAL: Detected lcore 9 as core 0 on socket 0 00:03:48.487 EAL: Maximum logical cores by configuration: 128 00:03:48.487 EAL: Detected CPU lcores: 10 00:03:48.487 EAL: Detected NUMA nodes: 1 00:03:48.487 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:48.487 EAL: Detected shared linkage of DPDK 00:03:48.487 EAL: No shared files mode enabled, IPC will be disabled 00:03:48.487 EAL: Selected IOVA mode 'PA' 00:03:48.487 EAL: Probing VFIO support... 00:03:48.487 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:48.487 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:48.487 EAL: Ask a virtual area of 0x2e000 bytes 00:03:48.487 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:48.487 EAL: Setting up physically contiguous memory... 
00:03:48.487 EAL: Setting maximum number of open files to 524288 00:03:48.487 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:48.487 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:48.487 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.487 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:48.487 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.487 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.487 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:48.487 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:48.487 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.487 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:48.487 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.487 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.487 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:48.487 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:48.487 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.487 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:48.487 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.487 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.487 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:48.487 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:48.487 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.487 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:48.487 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.487 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.487 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:48.487 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:48.487 EAL: Hugepages will be freed exactly as allocated. 
00:03:48.487 EAL: No shared files mode enabled, IPC is disabled 00:03:48.487 EAL: No shared files mode enabled, IPC is disabled 00:03:48.487 EAL: TSC frequency is ~2200000 KHz 00:03:48.487 EAL: Main lcore 0 is ready (tid=7ff527811a40;cpuset=[0]) 00:03:48.487 EAL: Trying to obtain current memory policy. 00:03:48.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.746 EAL: Restoring previous memory policy: 0 00:03:48.746 EAL: request: mp_malloc_sync 00:03:48.746 EAL: No shared files mode enabled, IPC is disabled 00:03:48.746 EAL: Heap on socket 0 was expanded by 2MB 00:03:48.746 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:48.746 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:48.746 EAL: Mem event callback 'spdk:(nil)' registered 00:03:48.746 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:48.746 00:03:48.746 00:03:48.746 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.746 http://cunit.sourceforge.net/ 00:03:48.746 00:03:48.746 00:03:48.746 Suite: components_suite 00:03:49.004 Test: vtophys_malloc_test ...passed 00:03:49.004 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:49.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.004 EAL: Restoring previous memory policy: 4 00:03:49.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.004 EAL: request: mp_malloc_sync 00:03:49.004 EAL: No shared files mode enabled, IPC is disabled 00:03:49.005 EAL: Heap on socket 0 was expanded by 4MB 00:03:49.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.005 EAL: request: mp_malloc_sync 00:03:49.005 EAL: No shared files mode enabled, IPC is disabled 00:03:49.005 EAL: Heap on socket 0 was shrunk by 4MB 00:03:49.005 EAL: Trying to obtain current memory policy. 
00:03:49.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.005 EAL: Restoring previous memory policy: 4 00:03:49.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.005 EAL: request: mp_malloc_sync 00:03:49.005 EAL: No shared files mode enabled, IPC is disabled 00:03:49.005 EAL: Heap on socket 0 was expanded by 6MB 00:03:49.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.005 EAL: request: mp_malloc_sync 00:03:49.005 EAL: No shared files mode enabled, IPC is disabled 00:03:49.005 EAL: Heap on socket 0 was shrunk by 6MB 00:03:49.005 EAL: Trying to obtain current memory policy. 00:03:49.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.005 EAL: Restoring previous memory policy: 4 00:03:49.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.005 EAL: request: mp_malloc_sync 00:03:49.005 EAL: No shared files mode enabled, IPC is disabled 00:03:49.005 EAL: Heap on socket 0 was expanded by 10MB 00:03:49.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.005 EAL: request: mp_malloc_sync 00:03:49.005 EAL: No shared files mode enabled, IPC is disabled 00:03:49.005 EAL: Heap on socket 0 was shrunk by 10MB 00:03:49.262 EAL: Trying to obtain current memory policy. 00:03:49.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.262 EAL: Restoring previous memory policy: 4 00:03:49.262 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.262 EAL: request: mp_malloc_sync 00:03:49.262 EAL: No shared files mode enabled, IPC is disabled 00:03:49.262 EAL: Heap on socket 0 was expanded by 18MB 00:03:49.262 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.262 EAL: request: mp_malloc_sync 00:03:49.262 EAL: No shared files mode enabled, IPC is disabled 00:03:49.262 EAL: Heap on socket 0 was shrunk by 18MB 00:03:49.262 EAL: Trying to obtain current memory policy. 
00:03:49.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.262 EAL: Restoring previous memory policy: 4 00:03:49.262 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.262 EAL: request: mp_malloc_sync 00:03:49.262 EAL: No shared files mode enabled, IPC is disabled 00:03:49.262 EAL: Heap on socket 0 was expanded by 34MB 00:03:49.262 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.262 EAL: request: mp_malloc_sync 00:03:49.262 EAL: No shared files mode enabled, IPC is disabled 00:03:49.262 EAL: Heap on socket 0 was shrunk by 34MB 00:03:49.262 EAL: Trying to obtain current memory policy. 00:03:49.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.262 EAL: Restoring previous memory policy: 4 00:03:49.262 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.262 EAL: request: mp_malloc_sync 00:03:49.262 EAL: No shared files mode enabled, IPC is disabled 00:03:49.262 EAL: Heap on socket 0 was expanded by 66MB 00:03:49.262 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.262 EAL: request: mp_malloc_sync 00:03:49.262 EAL: No shared files mode enabled, IPC is disabled 00:03:49.262 EAL: Heap on socket 0 was shrunk by 66MB 00:03:49.521 EAL: Trying to obtain current memory policy. 00:03:49.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.521 EAL: Restoring previous memory policy: 4 00:03:49.521 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.521 EAL: request: mp_malloc_sync 00:03:49.521 EAL: No shared files mode enabled, IPC is disabled 00:03:49.521 EAL: Heap on socket 0 was expanded by 130MB 00:03:49.521 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.780 EAL: request: mp_malloc_sync 00:03:49.780 EAL: No shared files mode enabled, IPC is disabled 00:03:49.780 EAL: Heap on socket 0 was shrunk by 130MB 00:03:49.780 EAL: Trying to obtain current memory policy. 
00:03:49.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.780 EAL: Restoring previous memory policy: 4 00:03:49.780 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.780 EAL: request: mp_malloc_sync 00:03:49.780 EAL: No shared files mode enabled, IPC is disabled 00:03:49.780 EAL: Heap on socket 0 was expanded by 258MB 00:03:50.347 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.347 EAL: request: mp_malloc_sync 00:03:50.347 EAL: No shared files mode enabled, IPC is disabled 00:03:50.347 EAL: Heap on socket 0 was shrunk by 258MB 00:03:50.606 EAL: Trying to obtain current memory policy. 00:03:50.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.606 EAL: Restoring previous memory policy: 4 00:03:50.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.606 EAL: request: mp_malloc_sync 00:03:50.606 EAL: No shared files mode enabled, IPC is disabled 00:03:50.606 EAL: Heap on socket 0 was expanded by 514MB 00:03:51.541 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.541 EAL: request: mp_malloc_sync 00:03:51.541 EAL: No shared files mode enabled, IPC is disabled 00:03:51.541 EAL: Heap on socket 0 was shrunk by 514MB 00:03:52.108 EAL: Trying to obtain current memory policy. 
00:03:52.108 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.367 EAL: Restoring previous memory policy: 4 00:03:52.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.367 EAL: request: mp_malloc_sync 00:03:52.367 EAL: No shared files mode enabled, IPC is disabled 00:03:52.367 EAL: Heap on socket 0 was expanded by 1026MB 00:03:53.745 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.745 EAL: request: mp_malloc_sync 00:03:53.745 EAL: No shared files mode enabled, IPC is disabled 00:03:53.745 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.123 passed 00:03:55.124 00:03:55.124 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.124 suites 1 1 n/a 0 0 00:03:55.124 tests 2 2 2 0 0 00:03:55.124 asserts 5712 5712 5712 0 n/a 00:03:55.124 00:03:55.124 Elapsed time = 6.251 seconds 00:03:55.124 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.124 EAL: request: mp_malloc_sync 00:03:55.124 EAL: No shared files mode enabled, IPC is disabled 00:03:55.124 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.124 EAL: No shared files mode enabled, IPC is disabled 00:03:55.124 EAL: No shared files mode enabled, IPC is disabled 00:03:55.124 EAL: No shared files mode enabled, IPC is disabled 00:03:55.124 ************************************ 00:03:55.124 END TEST env_vtophys 00:03:55.124 ************************************ 00:03:55.124 00:03:55.124 real 0m6.591s 00:03:55.124 user 0m5.469s 00:03:55.124 sys 0m0.961s 00:03:55.124 13:16:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.124 13:16:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:55.124 13:16:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.124 13:16:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.124 13:16:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.124 13:16:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.124 
************************************ 00:03:55.124 START TEST env_pci 00:03:55.124 ************************************ 00:03:55.124 13:16:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.124 00:03:55.124 00:03:55.124 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.124 http://cunit.sourceforge.net/ 00:03:55.124 00:03:55.124 00:03:55.124 Suite: pci 00:03:55.124 Test: pci_hook ...[2024-11-26 13:16:43.523760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56466 has claimed it 00:03:55.124 passed 00:03:55.124 00:03:55.124 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.124 suites 1 1 n/a 0 0 00:03:55.124 tests 1 1 1 0 0 00:03:55.124 asserts 25 25 25 0 n/a 00:03:55.124 00:03:55.124 Elapsed time = 0.006 seconds 00:03:55.124 EAL: Cannot find device (10000:00:01.0) 00:03:55.124 EAL: Failed to attach device on primary process 00:03:55.124 ************************************ 00:03:55.124 END TEST env_pci 00:03:55.124 ************************************ 00:03:55.124 00:03:55.124 real 0m0.070s 00:03:55.124 user 0m0.033s 00:03:55.124 sys 0m0.036s 00:03:55.124 13:16:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.124 13:16:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.124 13:16:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.124 13:16:43 env -- env/env.sh@15 -- # uname 00:03:55.124 13:16:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.124 13:16:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:55.124 13:16:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.124 13:16:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:55.124 13:16:43 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.124 13:16:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.124 ************************************ 00:03:55.124 START TEST env_dpdk_post_init 00:03:55.124 ************************************ 00:03:55.124 13:16:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.124 EAL: Detected CPU lcores: 10 00:03:55.124 EAL: Detected NUMA nodes: 1 00:03:55.124 EAL: Detected shared linkage of DPDK 00:03:55.383 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.383 EAL: Selected IOVA mode 'PA' 00:03:55.383 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.383 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:55.383 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:55.383 Starting DPDK initialization... 00:03:55.383 Starting SPDK post initialization... 00:03:55.383 SPDK NVMe probe 00:03:55.383 Attaching to 0000:00:10.0 00:03:55.383 Attaching to 0000:00:11.0 00:03:55.383 Attached to 0000:00:10.0 00:03:55.383 Attached to 0000:00:11.0 00:03:55.383 Cleaning up... 
00:03:55.383 00:03:55.383 real 0m0.304s 00:03:55.383 user 0m0.115s 00:03:55.383 sys 0m0.088s 00:03:55.383 ************************************ 00:03:55.383 END TEST env_dpdk_post_init 00:03:55.383 ************************************ 00:03:55.383 13:16:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.383 13:16:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.642 13:16:43 env -- env/env.sh@26 -- # uname 00:03:55.642 13:16:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.642 13:16:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.642 13:16:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.642 13:16:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.642 13:16:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.642 ************************************ 00:03:55.642 START TEST env_mem_callbacks 00:03:55.642 ************************************ 00:03:55.642 13:16:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.642 EAL: Detected CPU lcores: 10 00:03:55.642 EAL: Detected NUMA nodes: 1 00:03:55.642 EAL: Detected shared linkage of DPDK 00:03:55.642 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.642 EAL: Selected IOVA mode 'PA' 00:03:55.642 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.642 00:03:55.642 00:03:55.642 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.642 http://cunit.sourceforge.net/ 00:03:55.642 00:03:55.642 00:03:55.642 Suite: memory 00:03:55.642 Test: test ... 
00:03:55.642 register 0x200000200000 2097152 00:03:55.642 malloc 3145728 00:03:55.642 register 0x200000400000 4194304 00:03:55.642 buf 0x2000004fffc0 len 3145728 PASSED 00:03:55.642 malloc 64 00:03:55.642 buf 0x2000004ffec0 len 64 PASSED 00:03:55.642 malloc 4194304 00:03:55.642 register 0x200000800000 6291456 00:03:55.642 buf 0x2000009fffc0 len 4194304 PASSED 00:03:55.642 free 0x2000004fffc0 3145728 00:03:55.642 free 0x2000004ffec0 64 00:03:55.642 unregister 0x200000400000 4194304 PASSED 00:03:55.642 free 0x2000009fffc0 4194304 00:03:55.642 unregister 0x200000800000 6291456 PASSED 00:03:55.642 malloc 8388608 00:03:55.642 register 0x200000400000 10485760 00:03:55.901 buf 0x2000005fffc0 len 8388608 PASSED 00:03:55.901 free 0x2000005fffc0 8388608 00:03:55.901 unregister 0x200000400000 10485760 PASSED 00:03:55.901 passed 00:03:55.901 00:03:55.901 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.901 suites 1 1 n/a 0 0 00:03:55.901 tests 1 1 1 0 0 00:03:55.901 asserts 15 15 15 0 n/a 00:03:55.901 00:03:55.901 Elapsed time = 0.074 seconds 00:03:55.901 00:03:55.901 real 0m0.276s 00:03:55.901 user 0m0.112s 00:03:55.901 sys 0m0.060s 00:03:55.901 ************************************ 00:03:55.901 END TEST env_mem_callbacks 00:03:55.901 ************************************ 00:03:55.901 13:16:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.901 13:16:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:55.901 ************************************ 00:03:55.901 END TEST env 00:03:55.901 ************************************ 00:03:55.901 00:03:55.901 real 0m8.063s 00:03:55.901 user 0m6.269s 00:03:55.901 sys 0m1.407s 00:03:55.901 13:16:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.901 13:16:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.901 13:16:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.901 13:16:44 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.901 13:16:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.901 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:03:55.901 ************************************ 00:03:55.901 START TEST rpc 00:03:55.901 ************************************ 00:03:55.901 13:16:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.901 * Looking for test storage... 00:03:55.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:55.901 13:16:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:55.901 13:16:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:55.901 13:16:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.160 13:16:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.160 13:16:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.160 13:16:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.160 13:16:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.160 13:16:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.160 13:16:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.160 13:16:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.160 13:16:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.160 13:16:44 rpc -- scripts/common.sh@345 -- # : 1 00:03:56.160 13:16:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.160 13:16:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.160 13:16:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.160 13:16:44 rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.160 13:16:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.160 13:16:44 rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.160 13:16:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.160 13:16:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.160 13:16:44 rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.160 13:16:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.160 13:16:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.160 13:16:44 rpc -- scripts/common.sh@368 -- # return 0 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.160 --rc genhtml_branch_coverage=1 00:03:56.160 --rc genhtml_function_coverage=1 00:03:56.160 --rc genhtml_legend=1 00:03:56.160 --rc geninfo_all_blocks=1 00:03:56.160 --rc geninfo_unexecuted_blocks=1 00:03:56.160 00:03:56.160 ' 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.160 --rc genhtml_branch_coverage=1 00:03:56.160 --rc genhtml_function_coverage=1 00:03:56.160 --rc genhtml_legend=1 00:03:56.160 --rc geninfo_all_blocks=1 00:03:56.160 --rc geninfo_unexecuted_blocks=1 00:03:56.160 00:03:56.160 ' 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:56.160 --rc genhtml_branch_coverage=1 00:03:56.160 --rc genhtml_function_coverage=1 00:03:56.160 --rc genhtml_legend=1 00:03:56.160 --rc geninfo_all_blocks=1 00:03:56.160 --rc geninfo_unexecuted_blocks=1 00:03:56.160 00:03:56.160 ' 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.160 --rc genhtml_branch_coverage=1 00:03:56.160 --rc genhtml_function_coverage=1 00:03:56.160 --rc genhtml_legend=1 00:03:56.160 --rc geninfo_all_blocks=1 00:03:56.160 --rc geninfo_unexecuted_blocks=1 00:03:56.160 00:03:56.160 ' 00:03:56.160 13:16:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56593 00:03:56.160 13:16:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.160 13:16:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:56.160 13:16:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56593 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 56593 ']' 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.160 13:16:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.160 [2024-11-26 13:16:44.677852] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:03:56.160 [2024-11-26 13:16:44.678321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56593 ] 00:03:56.419 [2024-11-26 13:16:44.869335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.678 [2024-11-26 13:16:45.016568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:56.678 [2024-11-26 13:16:45.016893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56593' to capture a snapshot of events at runtime. 00:03:56.679 [2024-11-26 13:16:45.016947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:56.679 [2024-11-26 13:16:45.016983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:56.679 [2024-11-26 13:16:45.017007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56593 for offline analysis/debug. 
00:03:56.679 [2024-11-26 13:16:45.018744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.248 13:16:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.248 13:16:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:57.248 13:16:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.248 13:16:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.248 13:16:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:57.248 13:16:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:57.248 13:16:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.248 13:16:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.248 13:16:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.248 ************************************ 00:03:57.248 START TEST rpc_integrity 00:03:57.248 ************************************ 00:03:57.248 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:57.248 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.248 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.248 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.248 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.248 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.248 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.508 13:16:45 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.508 { 00:03:57.508 "name": "Malloc0", 00:03:57.508 "aliases": [ 00:03:57.508 "4c1d070c-704b-4bea-841e-723ad8eb9db0" 00:03:57.508 ], 00:03:57.508 "product_name": "Malloc disk", 00:03:57.508 "block_size": 512, 00:03:57.508 "num_blocks": 16384, 00:03:57.508 "uuid": "4c1d070c-704b-4bea-841e-723ad8eb9db0", 00:03:57.508 "assigned_rate_limits": { 00:03:57.508 "rw_ios_per_sec": 0, 00:03:57.508 "rw_mbytes_per_sec": 0, 00:03:57.508 "r_mbytes_per_sec": 0, 00:03:57.508 "w_mbytes_per_sec": 0 00:03:57.508 }, 00:03:57.508 "claimed": false, 00:03:57.508 "zoned": false, 00:03:57.508 "supported_io_types": { 00:03:57.508 "read": true, 00:03:57.508 "write": true, 00:03:57.508 "unmap": true, 00:03:57.508 "flush": true, 00:03:57.508 "reset": true, 00:03:57.508 "nvme_admin": false, 00:03:57.508 "nvme_io": false, 00:03:57.508 "nvme_io_md": false, 00:03:57.508 "write_zeroes": true, 00:03:57.508 "zcopy": true, 00:03:57.508 "get_zone_info": false, 00:03:57.508 "zone_management": false, 00:03:57.508 "zone_append": false, 00:03:57.508 "compare": false, 00:03:57.508 "compare_and_write": false, 00:03:57.508 "abort": true, 00:03:57.508 "seek_hole": false, 
00:03:57.508 "seek_data": false, 00:03:57.508 "copy": true, 00:03:57.508 "nvme_iov_md": false 00:03:57.508 }, 00:03:57.508 "memory_domains": [ 00:03:57.508 { 00:03:57.508 "dma_device_id": "system", 00:03:57.508 "dma_device_type": 1 00:03:57.508 }, 00:03:57.508 { 00:03:57.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.508 "dma_device_type": 2 00:03:57.508 } 00:03:57.508 ], 00:03:57.508 "driver_specific": {} 00:03:57.508 } 00:03:57.508 ]' 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.508 [2024-11-26 13:16:45.945976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:57.508 [2024-11-26 13:16:45.946065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.508 [2024-11-26 13:16:45.946109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:03:57.508 [2024-11-26 13:16:45.946143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.508 [2024-11-26 13:16:45.949581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.508 [2024-11-26 13:16:45.949653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.508 Passthru0 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:03:57.508 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.508 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.508 { 00:03:57.508 "name": "Malloc0", 00:03:57.508 "aliases": [ 00:03:57.508 "4c1d070c-704b-4bea-841e-723ad8eb9db0" 00:03:57.508 ], 00:03:57.508 "product_name": "Malloc disk", 00:03:57.508 "block_size": 512, 00:03:57.508 "num_blocks": 16384, 00:03:57.508 "uuid": "4c1d070c-704b-4bea-841e-723ad8eb9db0", 00:03:57.508 "assigned_rate_limits": { 00:03:57.508 "rw_ios_per_sec": 0, 00:03:57.508 "rw_mbytes_per_sec": 0, 00:03:57.508 "r_mbytes_per_sec": 0, 00:03:57.508 "w_mbytes_per_sec": 0 00:03:57.508 }, 00:03:57.508 "claimed": true, 00:03:57.508 "claim_type": "exclusive_write", 00:03:57.508 "zoned": false, 00:03:57.508 "supported_io_types": { 00:03:57.508 "read": true, 00:03:57.508 "write": true, 00:03:57.508 "unmap": true, 00:03:57.508 "flush": true, 00:03:57.508 "reset": true, 00:03:57.508 "nvme_admin": false, 00:03:57.508 "nvme_io": false, 00:03:57.508 "nvme_io_md": false, 00:03:57.508 "write_zeroes": true, 00:03:57.508 "zcopy": true, 00:03:57.508 "get_zone_info": false, 00:03:57.508 "zone_management": false, 00:03:57.508 "zone_append": false, 00:03:57.508 "compare": false, 00:03:57.508 "compare_and_write": false, 00:03:57.508 "abort": true, 00:03:57.508 "seek_hole": false, 00:03:57.508 "seek_data": false, 00:03:57.508 "copy": true, 00:03:57.508 "nvme_iov_md": false 00:03:57.508 }, 00:03:57.508 "memory_domains": [ 00:03:57.508 { 00:03:57.508 "dma_device_id": "system", 00:03:57.508 "dma_device_type": 1 00:03:57.508 }, 00:03:57.508 { 00:03:57.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.508 "dma_device_type": 2 00:03:57.508 } 00:03:57.508 ], 00:03:57.508 "driver_specific": {} 00:03:57.508 }, 00:03:57.508 { 00:03:57.508 "name": "Passthru0", 00:03:57.508 "aliases": [ 00:03:57.508 "014cc3ca-3b7e-5198-93c2-c17001ff7719" 00:03:57.508 ], 00:03:57.508 "product_name": "passthru", 00:03:57.508 
"block_size": 512, 00:03:57.508 "num_blocks": 16384, 00:03:57.508 "uuid": "014cc3ca-3b7e-5198-93c2-c17001ff7719", 00:03:57.508 "assigned_rate_limits": { 00:03:57.508 "rw_ios_per_sec": 0, 00:03:57.508 "rw_mbytes_per_sec": 0, 00:03:57.508 "r_mbytes_per_sec": 0, 00:03:57.508 "w_mbytes_per_sec": 0 00:03:57.508 }, 00:03:57.508 "claimed": false, 00:03:57.508 "zoned": false, 00:03:57.508 "supported_io_types": { 00:03:57.508 "read": true, 00:03:57.508 "write": true, 00:03:57.508 "unmap": true, 00:03:57.508 "flush": true, 00:03:57.508 "reset": true, 00:03:57.508 "nvme_admin": false, 00:03:57.508 "nvme_io": false, 00:03:57.508 "nvme_io_md": false, 00:03:57.508 "write_zeroes": true, 00:03:57.508 "zcopy": true, 00:03:57.508 "get_zone_info": false, 00:03:57.508 "zone_management": false, 00:03:57.508 "zone_append": false, 00:03:57.509 "compare": false, 00:03:57.509 "compare_and_write": false, 00:03:57.509 "abort": true, 00:03:57.509 "seek_hole": false, 00:03:57.509 "seek_data": false, 00:03:57.509 "copy": true, 00:03:57.509 "nvme_iov_md": false 00:03:57.509 }, 00:03:57.509 "memory_domains": [ 00:03:57.509 { 00:03:57.509 "dma_device_id": "system", 00:03:57.509 "dma_device_type": 1 00:03:57.509 }, 00:03:57.509 { 00:03:57.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.509 "dma_device_type": 2 00:03:57.509 } 00:03:57.509 ], 00:03:57.509 "driver_specific": { 00:03:57.509 "passthru": { 00:03:57.509 "name": "Passthru0", 00:03:57.509 "base_bdev_name": "Malloc0" 00:03:57.509 } 00:03:57.509 } 00:03:57.509 } 00:03:57.509 ]' 00:03:57.509 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.509 13:16:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.509 13:16:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.509 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.509 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.509 13:16:46 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.509 13:16:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.509 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.509 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.768 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.768 13:16:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.768 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.768 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.768 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.768 13:16:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.768 13:16:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.768 ************************************ 00:03:57.768 END TEST rpc_integrity 00:03:57.768 ************************************ 00:03:57.768 13:16:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.768 00:03:57.768 real 0m0.372s 00:03:57.768 user 0m0.232s 00:03:57.768 sys 0m0.038s 00:03:57.768 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.768 13:16:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.768 13:16:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:57.768 13:16:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.768 13:16:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.768 13:16:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.768 ************************************ 00:03:57.768 START TEST rpc_plugins 00:03:57.768 ************************************ 00:03:57.768 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:57.768 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.768 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.768 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.768 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.768 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.768 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:57.768 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.768 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.768 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.768 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.768 { 00:03:57.768 "name": "Malloc1", 00:03:57.768 "aliases": [ 00:03:57.768 "9d1aa629-f78d-45d1-938a-635f625cfe67" 00:03:57.768 ], 00:03:57.768 "product_name": "Malloc disk", 00:03:57.768 "block_size": 4096, 00:03:57.768 "num_blocks": 256, 00:03:57.768 "uuid": "9d1aa629-f78d-45d1-938a-635f625cfe67", 00:03:57.768 "assigned_rate_limits": { 00:03:57.768 "rw_ios_per_sec": 0, 00:03:57.768 "rw_mbytes_per_sec": 0, 00:03:57.768 "r_mbytes_per_sec": 0, 00:03:57.769 "w_mbytes_per_sec": 0 00:03:57.769 }, 00:03:57.769 "claimed": false, 00:03:57.769 "zoned": false, 00:03:57.769 "supported_io_types": { 00:03:57.769 "read": true, 00:03:57.769 "write": true, 00:03:57.769 "unmap": true, 00:03:57.769 "flush": true, 00:03:57.769 "reset": true, 00:03:57.769 "nvme_admin": false, 00:03:57.769 "nvme_io": false, 00:03:57.769 "nvme_io_md": false, 00:03:57.769 "write_zeroes": true, 00:03:57.769 "zcopy": true, 00:03:57.769 "get_zone_info": false, 00:03:57.769 "zone_management": false, 00:03:57.769 "zone_append": false, 00:03:57.769 "compare": false, 00:03:57.769 "compare_and_write": false, 00:03:57.769 "abort": true, 00:03:57.769 "seek_hole": false, 00:03:57.769 "seek_data": false, 00:03:57.769 "copy": 
true, 00:03:57.769 "nvme_iov_md": false 00:03:57.769 }, 00:03:57.769 "memory_domains": [ 00:03:57.769 { 00:03:57.769 "dma_device_id": "system", 00:03:57.769 "dma_device_type": 1 00:03:57.769 }, 00:03:57.769 { 00:03:57.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.769 "dma_device_type": 2 00:03:57.769 } 00:03:57.769 ], 00:03:57.769 "driver_specific": {} 00:03:57.769 } 00:03:57.769 ]' 00:03:57.769 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:57.769 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.769 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.769 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.769 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.769 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.769 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.769 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.769 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.769 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.769 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.769 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:58.028 ************************************ 00:03:58.028 END TEST rpc_plugins 00:03:58.028 ************************************ 00:03:58.028 13:16:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:58.028 00:03:58.028 real 0m0.162s 00:03:58.028 user 0m0.101s 00:03:58.028 sys 0m0.016s 00:03:58.028 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.028 13:16:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.028 13:16:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:58.028 13:16:46 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.028 13:16:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.028 13:16:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.028 ************************************ 00:03:58.028 START TEST rpc_trace_cmd_test 00:03:58.028 ************************************ 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:58.028 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56593", 00:03:58.028 "tpoint_group_mask": "0x8", 00:03:58.028 "iscsi_conn": { 00:03:58.028 "mask": "0x2", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "scsi": { 00:03:58.028 "mask": "0x4", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "bdev": { 00:03:58.028 "mask": "0x8", 00:03:58.028 "tpoint_mask": "0xffffffffffffffff" 00:03:58.028 }, 00:03:58.028 "nvmf_rdma": { 00:03:58.028 "mask": "0x10", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "nvmf_tcp": { 00:03:58.028 "mask": "0x20", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "ftl": { 00:03:58.028 "mask": "0x40", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "blobfs": { 00:03:58.028 "mask": "0x80", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "dsa": { 00:03:58.028 "mask": "0x200", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "thread": { 00:03:58.028 "mask": "0x400", 00:03:58.028 
"tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "nvme_pcie": { 00:03:58.028 "mask": "0x800", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "iaa": { 00:03:58.028 "mask": "0x1000", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "nvme_tcp": { 00:03:58.028 "mask": "0x2000", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "bdev_nvme": { 00:03:58.028 "mask": "0x4000", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "sock": { 00:03:58.028 "mask": "0x8000", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "blob": { 00:03:58.028 "mask": "0x10000", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "bdev_raid": { 00:03:58.028 "mask": "0x20000", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 }, 00:03:58.028 "scheduler": { 00:03:58.028 "mask": "0x40000", 00:03:58.028 "tpoint_mask": "0x0" 00:03:58.028 } 00:03:58.028 }' 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:58.028 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:58.287 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:58.287 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:58.287 ************************************ 00:03:58.287 END TEST rpc_trace_cmd_test 00:03:58.287 ************************************ 00:03:58.287 13:16:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:58.287 00:03:58.287 real 0m0.276s 00:03:58.287 user 
0m0.230s 00:03:58.287 sys 0m0.033s 00:03:58.287 13:16:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.287 13:16:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.287 13:16:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:58.287 13:16:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:58.287 13:16:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:58.287 13:16:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.287 13:16:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.287 13:16:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.287 ************************************ 00:03:58.287 START TEST rpc_daemon_integrity 00:03:58.287 ************************************ 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.287 { 00:03:58.287 "name": "Malloc2", 00:03:58.287 "aliases": [ 00:03:58.287 "f6499e49-07ea-4b53-a295-3a9149a25a57" 00:03:58.287 ], 00:03:58.287 "product_name": "Malloc disk", 00:03:58.287 "block_size": 512, 00:03:58.287 "num_blocks": 16384, 00:03:58.287 "uuid": "f6499e49-07ea-4b53-a295-3a9149a25a57", 00:03:58.287 "assigned_rate_limits": { 00:03:58.287 "rw_ios_per_sec": 0, 00:03:58.287 "rw_mbytes_per_sec": 0, 00:03:58.287 "r_mbytes_per_sec": 0, 00:03:58.287 "w_mbytes_per_sec": 0 00:03:58.287 }, 00:03:58.287 "claimed": false, 00:03:58.287 "zoned": false, 00:03:58.287 "supported_io_types": { 00:03:58.287 "read": true, 00:03:58.287 "write": true, 00:03:58.287 "unmap": true, 00:03:58.287 "flush": true, 00:03:58.287 "reset": true, 00:03:58.287 "nvme_admin": false, 00:03:58.287 "nvme_io": false, 00:03:58.287 "nvme_io_md": false, 00:03:58.287 "write_zeroes": true, 00:03:58.287 "zcopy": true, 00:03:58.287 "get_zone_info": false, 00:03:58.287 "zone_management": false, 00:03:58.287 "zone_append": false, 00:03:58.287 "compare": false, 00:03:58.287 "compare_and_write": false, 00:03:58.287 "abort": true, 00:03:58.287 "seek_hole": false, 00:03:58.287 "seek_data": false, 00:03:58.287 "copy": true, 00:03:58.287 "nvme_iov_md": false 00:03:58.287 }, 00:03:58.287 "memory_domains": [ 00:03:58.287 { 00:03:58.287 "dma_device_id": "system", 00:03:58.287 "dma_device_type": 1 00:03:58.287 }, 00:03:58.287 { 00:03:58.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.287 "dma_device_type": 2 00:03:58.287 } 
00:03:58.287 ], 00:03:58.287 "driver_specific": {} 00:03:58.287 } 00:03:58.287 ]' 00:03:58.287 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.547 [2024-11-26 13:16:46.894397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:58.547 [2024-11-26 13:16:46.894476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.547 [2024-11-26 13:16:46.894504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:03:58.547 [2024-11-26 13:16:46.894520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.547 [2024-11-26 13:16:46.897409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.547 [2024-11-26 13:16:46.897474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.547 Passthru0 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.547 { 00:03:58.547 "name": "Malloc2", 00:03:58.547 "aliases": [ 00:03:58.547 "f6499e49-07ea-4b53-a295-3a9149a25a57" 
00:03:58.547 ], 00:03:58.547 "product_name": "Malloc disk", 00:03:58.547 "block_size": 512, 00:03:58.547 "num_blocks": 16384, 00:03:58.547 "uuid": "f6499e49-07ea-4b53-a295-3a9149a25a57", 00:03:58.547 "assigned_rate_limits": { 00:03:58.547 "rw_ios_per_sec": 0, 00:03:58.547 "rw_mbytes_per_sec": 0, 00:03:58.547 "r_mbytes_per_sec": 0, 00:03:58.547 "w_mbytes_per_sec": 0 00:03:58.547 }, 00:03:58.547 "claimed": true, 00:03:58.547 "claim_type": "exclusive_write", 00:03:58.547 "zoned": false, 00:03:58.547 "supported_io_types": { 00:03:58.547 "read": true, 00:03:58.547 "write": true, 00:03:58.547 "unmap": true, 00:03:58.547 "flush": true, 00:03:58.547 "reset": true, 00:03:58.547 "nvme_admin": false, 00:03:58.547 "nvme_io": false, 00:03:58.547 "nvme_io_md": false, 00:03:58.547 "write_zeroes": true, 00:03:58.547 "zcopy": true, 00:03:58.547 "get_zone_info": false, 00:03:58.547 "zone_management": false, 00:03:58.547 "zone_append": false, 00:03:58.547 "compare": false, 00:03:58.547 "compare_and_write": false, 00:03:58.547 "abort": true, 00:03:58.547 "seek_hole": false, 00:03:58.547 "seek_data": false, 00:03:58.547 "copy": true, 00:03:58.547 "nvme_iov_md": false 00:03:58.547 }, 00:03:58.547 "memory_domains": [ 00:03:58.547 { 00:03:58.547 "dma_device_id": "system", 00:03:58.547 "dma_device_type": 1 00:03:58.547 }, 00:03:58.547 { 00:03:58.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.547 "dma_device_type": 2 00:03:58.547 } 00:03:58.547 ], 00:03:58.547 "driver_specific": {} 00:03:58.547 }, 00:03:58.547 { 00:03:58.547 "name": "Passthru0", 00:03:58.547 "aliases": [ 00:03:58.547 "9cd83e45-d4a0-54d1-8d95-d8b7b9aabe06" 00:03:58.547 ], 00:03:58.547 "product_name": "passthru", 00:03:58.547 "block_size": 512, 00:03:58.547 "num_blocks": 16384, 00:03:58.547 "uuid": "9cd83e45-d4a0-54d1-8d95-d8b7b9aabe06", 00:03:58.547 "assigned_rate_limits": { 00:03:58.547 "rw_ios_per_sec": 0, 00:03:58.547 "rw_mbytes_per_sec": 0, 00:03:58.547 "r_mbytes_per_sec": 0, 00:03:58.547 "w_mbytes_per_sec": 0 
00:03:58.547 }, 00:03:58.547 "claimed": false, 00:03:58.547 "zoned": false, 00:03:58.547 "supported_io_types": { 00:03:58.547 "read": true, 00:03:58.547 "write": true, 00:03:58.547 "unmap": true, 00:03:58.547 "flush": true, 00:03:58.547 "reset": true, 00:03:58.547 "nvme_admin": false, 00:03:58.547 "nvme_io": false, 00:03:58.547 "nvme_io_md": false, 00:03:58.547 "write_zeroes": true, 00:03:58.547 "zcopy": true, 00:03:58.547 "get_zone_info": false, 00:03:58.547 "zone_management": false, 00:03:58.547 "zone_append": false, 00:03:58.547 "compare": false, 00:03:58.547 "compare_and_write": false, 00:03:58.547 "abort": true, 00:03:58.547 "seek_hole": false, 00:03:58.547 "seek_data": false, 00:03:58.547 "copy": true, 00:03:58.547 "nvme_iov_md": false 00:03:58.547 }, 00:03:58.547 "memory_domains": [ 00:03:58.547 { 00:03:58.547 "dma_device_id": "system", 00:03:58.547 "dma_device_type": 1 00:03:58.547 }, 00:03:58.547 { 00:03:58.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.547 "dma_device_type": 2 00:03:58.547 } 00:03:58.547 ], 00:03:58.547 "driver_specific": { 00:03:58.547 "passthru": { 00:03:58.547 "name": "Passthru0", 00:03:58.547 "base_bdev_name": "Malloc2" 00:03:58.547 } 00:03:58.547 } 00:03:58.547 } 00:03:58.547 ]' 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:03:58.547 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.547 ************************************ 00:03:58.547 END TEST rpc_daemon_integrity 00:03:58.547 ************************************ 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.547 00:03:58.547 real 0m0.337s 00:03:58.547 user 0m0.222s 00:03:58.547 sys 0m0.032s 00:03:58.547 13:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.548 13:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.806 13:16:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:58.806 13:16:47 rpc -- rpc/rpc.sh@84 -- # killprocess 56593 00:03:58.806 13:16:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 56593 ']' 00:03:58.806 13:16:47 rpc -- common/autotest_common.sh@958 -- # kill -0 56593 00:03:58.806 13:16:47 rpc -- common/autotest_common.sh@959 -- # uname 00:03:58.806 13:16:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.806 13:16:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56593 00:03:58.807 13:16:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.807 13:16:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.807 
killing process with pid 56593 00:03:58.807 13:16:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56593' 00:03:58.807 13:16:47 rpc -- common/autotest_common.sh@973 -- # kill 56593 00:03:58.807 13:16:47 rpc -- common/autotest_common.sh@978 -- # wait 56593 00:04:00.711 00:04:00.711 real 0m4.585s 00:04:00.712 user 0m5.352s 00:04:00.712 sys 0m0.854s 00:04:00.712 13:16:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.712 ************************************ 00:04:00.712 END TEST rpc 00:04:00.712 ************************************ 00:04:00.712 13:16:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.712 13:16:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:00.712 13:16:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.712 13:16:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.712 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:04:00.712 ************************************ 00:04:00.712 START TEST skip_rpc 00:04:00.712 ************************************ 00:04:00.712 13:16:48 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:00.712 * Looking for test storage... 
00:04:00.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.712 13:16:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.712 --rc genhtml_branch_coverage=1 00:04:00.712 --rc genhtml_function_coverage=1 00:04:00.712 --rc genhtml_legend=1 00:04:00.712 --rc geninfo_all_blocks=1 00:04:00.712 --rc geninfo_unexecuted_blocks=1 00:04:00.712 00:04:00.712 ' 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.712 --rc genhtml_branch_coverage=1 00:04:00.712 --rc genhtml_function_coverage=1 00:04:00.712 --rc genhtml_legend=1 00:04:00.712 --rc geninfo_all_blocks=1 00:04:00.712 --rc geninfo_unexecuted_blocks=1 00:04:00.712 00:04:00.712 ' 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:00.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.712 --rc genhtml_branch_coverage=1 00:04:00.712 --rc genhtml_function_coverage=1 00:04:00.712 --rc genhtml_legend=1 00:04:00.712 --rc geninfo_all_blocks=1 00:04:00.712 --rc geninfo_unexecuted_blocks=1 00:04:00.712 00:04:00.712 ' 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.712 --rc genhtml_branch_coverage=1 00:04:00.712 --rc genhtml_function_coverage=1 00:04:00.712 --rc genhtml_legend=1 00:04:00.712 --rc geninfo_all_blocks=1 00:04:00.712 --rc geninfo_unexecuted_blocks=1 00:04:00.712 00:04:00.712 ' 00:04:00.712 13:16:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:00.712 13:16:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:00.712 13:16:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.712 13:16:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.712 ************************************ 00:04:00.712 START TEST skip_rpc 00:04:00.712 ************************************ 00:04:00.712 13:16:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:00.712 13:16:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56811 00:04:00.712 13:16:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.712 13:16:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:00.712 13:16:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:00.971 [2024-11-26 13:16:49.300880] Starting SPDK v25.01-pre 
git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:00.971 [2024-11-26 13:16:49.301057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56811 ] 00:04:00.971 [2024-11-26 13:16:49.480688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.230 [2024-11-26 13:16:49.581780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56811 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56811 ']' 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56811 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56811 00:04:06.538 killing process with pid 56811 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56811' 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56811 00:04:06.538 13:16:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56811 00:04:07.477 ************************************ 00:04:07.477 END TEST skip_rpc 00:04:07.477 ************************************ 00:04:07.477 00:04:07.477 real 0m6.830s 00:04:07.477 user 0m6.318s 00:04:07.477 sys 0m0.412s 00:04:07.477 13:16:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.477 13:16:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.736 13:16:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:07.736 13:16:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.736 13:16:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.736 13:16:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.736 
************************************ 00:04:07.736 START TEST skip_rpc_with_json 00:04:07.736 ************************************ 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56915 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:07.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56915 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56915 ']' 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.736 13:16:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.736 [2024-11-26 13:16:56.178717] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:07.736 [2024-11-26 13:16:56.179200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56915 ] 00:04:07.995 [2024-11-26 13:16:56.362455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.995 [2024-11-26 13:16:56.462241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.932 [2024-11-26 13:16:57.185529] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:08.932 request: 00:04:08.932 { 00:04:08.932 "trtype": "tcp", 00:04:08.932 "method": "nvmf_get_transports", 00:04:08.932 "req_id": 1 00:04:08.932 } 00:04:08.932 Got JSON-RPC error response 00:04:08.932 response: 00:04:08.932 { 00:04:08.932 "code": -19, 00:04:08.932 "message": "No such device" 00:04:08.932 } 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.932 [2024-11-26 13:16:57.193700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.932 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.932 { 00:04:08.932 "subsystems": [ 00:04:08.932 { 00:04:08.932 "subsystem": "fsdev", 00:04:08.932 "config": [ 00:04:08.932 { 00:04:08.932 "method": "fsdev_set_opts", 00:04:08.932 "params": { 00:04:08.932 "fsdev_io_pool_size": 65535, 00:04:08.932 "fsdev_io_cache_size": 256 00:04:08.932 } 00:04:08.932 } 00:04:08.932 ] 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "subsystem": "keyring", 00:04:08.932 "config": [] 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "subsystem": "iobuf", 00:04:08.932 "config": [ 00:04:08.932 { 00:04:08.932 "method": "iobuf_set_options", 00:04:08.932 "params": { 00:04:08.932 "small_pool_count": 8192, 00:04:08.932 "large_pool_count": 1024, 00:04:08.932 "small_bufsize": 8192, 00:04:08.932 "large_bufsize": 135168, 00:04:08.932 "enable_numa": false 00:04:08.932 } 00:04:08.932 } 00:04:08.932 ] 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "subsystem": "sock", 00:04:08.932 "config": [ 00:04:08.932 { 00:04:08.932 "method": "sock_set_default_impl", 00:04:08.932 "params": { 00:04:08.932 "impl_name": "posix" 00:04:08.932 } 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "method": "sock_impl_set_options", 00:04:08.932 "params": { 00:04:08.932 "impl_name": "ssl", 00:04:08.932 "recv_buf_size": 4096, 00:04:08.932 "send_buf_size": 4096, 00:04:08.932 "enable_recv_pipe": true, 00:04:08.932 "enable_quickack": false, 00:04:08.932 
"enable_placement_id": 0, 00:04:08.932 "enable_zerocopy_send_server": true, 00:04:08.932 "enable_zerocopy_send_client": false, 00:04:08.932 "zerocopy_threshold": 0, 00:04:08.932 "tls_version": 0, 00:04:08.932 "enable_ktls": false 00:04:08.932 } 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "method": "sock_impl_set_options", 00:04:08.932 "params": { 00:04:08.932 "impl_name": "posix", 00:04:08.932 "recv_buf_size": 2097152, 00:04:08.932 "send_buf_size": 2097152, 00:04:08.932 "enable_recv_pipe": true, 00:04:08.932 "enable_quickack": false, 00:04:08.932 "enable_placement_id": 0, 00:04:08.932 "enable_zerocopy_send_server": true, 00:04:08.932 "enable_zerocopy_send_client": false, 00:04:08.932 "zerocopy_threshold": 0, 00:04:08.932 "tls_version": 0, 00:04:08.932 "enable_ktls": false 00:04:08.932 } 00:04:08.932 } 00:04:08.932 ] 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "subsystem": "vmd", 00:04:08.932 "config": [] 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "subsystem": "accel", 00:04:08.932 "config": [ 00:04:08.932 { 00:04:08.932 "method": "accel_set_options", 00:04:08.932 "params": { 00:04:08.932 "small_cache_size": 128, 00:04:08.932 "large_cache_size": 16, 00:04:08.932 "task_count": 2048, 00:04:08.932 "sequence_count": 2048, 00:04:08.932 "buf_count": 2048 00:04:08.932 } 00:04:08.932 } 00:04:08.932 ] 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "subsystem": "bdev", 00:04:08.932 "config": [ 00:04:08.932 { 00:04:08.932 "method": "bdev_set_options", 00:04:08.932 "params": { 00:04:08.932 "bdev_io_pool_size": 65535, 00:04:08.932 "bdev_io_cache_size": 256, 00:04:08.932 "bdev_auto_examine": true, 00:04:08.932 "iobuf_small_cache_size": 128, 00:04:08.932 "iobuf_large_cache_size": 16 00:04:08.932 } 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "method": "bdev_raid_set_options", 00:04:08.932 "params": { 00:04:08.932 "process_window_size_kb": 1024, 00:04:08.932 "process_max_bandwidth_mb_sec": 0 00:04:08.932 } 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "method": "bdev_iscsi_set_options", 
00:04:08.932 "params": { 00:04:08.932 "timeout_sec": 30 00:04:08.932 } 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "method": "bdev_nvme_set_options", 00:04:08.932 "params": { 00:04:08.932 "action_on_timeout": "none", 00:04:08.932 "timeout_us": 0, 00:04:08.932 "timeout_admin_us": 0, 00:04:08.932 "keep_alive_timeout_ms": 10000, 00:04:08.932 "arbitration_burst": 0, 00:04:08.932 "low_priority_weight": 0, 00:04:08.932 "medium_priority_weight": 0, 00:04:08.932 "high_priority_weight": 0, 00:04:08.932 "nvme_adminq_poll_period_us": 10000, 00:04:08.932 "nvme_ioq_poll_period_us": 0, 00:04:08.932 "io_queue_requests": 0, 00:04:08.932 "delay_cmd_submit": true, 00:04:08.932 "transport_retry_count": 4, 00:04:08.932 "bdev_retry_count": 3, 00:04:08.932 "transport_ack_timeout": 0, 00:04:08.932 "ctrlr_loss_timeout_sec": 0, 00:04:08.932 "reconnect_delay_sec": 0, 00:04:08.932 "fast_io_fail_timeout_sec": 0, 00:04:08.932 "disable_auto_failback": false, 00:04:08.932 "generate_uuids": false, 00:04:08.932 "transport_tos": 0, 00:04:08.932 "nvme_error_stat": false, 00:04:08.932 "rdma_srq_size": 0, 00:04:08.932 "io_path_stat": false, 00:04:08.932 "allow_accel_sequence": false, 00:04:08.932 "rdma_max_cq_size": 0, 00:04:08.932 "rdma_cm_event_timeout_ms": 0, 00:04:08.932 "dhchap_digests": [ 00:04:08.932 "sha256", 00:04:08.932 "sha384", 00:04:08.932 "sha512" 00:04:08.932 ], 00:04:08.932 "dhchap_dhgroups": [ 00:04:08.932 "null", 00:04:08.932 "ffdhe2048", 00:04:08.932 "ffdhe3072", 00:04:08.932 "ffdhe4096", 00:04:08.932 "ffdhe6144", 00:04:08.932 "ffdhe8192" 00:04:08.932 ] 00:04:08.932 } 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "method": "bdev_nvme_set_hotplug", 00:04:08.932 "params": { 00:04:08.932 "period_us": 100000, 00:04:08.932 "enable": false 00:04:08.932 } 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "method": "bdev_wait_for_examine" 00:04:08.932 } 00:04:08.932 ] 00:04:08.932 }, 00:04:08.932 { 00:04:08.932 "subsystem": "scsi", 00:04:08.932 "config": null 00:04:08.932 }, 00:04:08.932 { 
00:04:08.932 "subsystem": "scheduler", 00:04:08.932 "config": [ 00:04:08.932 { 00:04:08.932 "method": "framework_set_scheduler", 00:04:08.932 "params": { 00:04:08.932 "name": "static" 00:04:08.933 } 00:04:08.933 } 00:04:08.933 ] 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "subsystem": "vhost_scsi", 00:04:08.933 "config": [] 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "subsystem": "vhost_blk", 00:04:08.933 "config": [] 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "subsystem": "ublk", 00:04:08.933 "config": [] 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "subsystem": "nbd", 00:04:08.933 "config": [] 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "subsystem": "nvmf", 00:04:08.933 "config": [ 00:04:08.933 { 00:04:08.933 "method": "nvmf_set_config", 00:04:08.933 "params": { 00:04:08.933 "discovery_filter": "match_any", 00:04:08.933 "admin_cmd_passthru": { 00:04:08.933 "identify_ctrlr": false 00:04:08.933 }, 00:04:08.933 "dhchap_digests": [ 00:04:08.933 "sha256", 00:04:08.933 "sha384", 00:04:08.933 "sha512" 00:04:08.933 ], 00:04:08.933 "dhchap_dhgroups": [ 00:04:08.933 "null", 00:04:08.933 "ffdhe2048", 00:04:08.933 "ffdhe3072", 00:04:08.933 "ffdhe4096", 00:04:08.933 "ffdhe6144", 00:04:08.933 "ffdhe8192" 00:04:08.933 ] 00:04:08.933 } 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "method": "nvmf_set_max_subsystems", 00:04:08.933 "params": { 00:04:08.933 "max_subsystems": 1024 00:04:08.933 } 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "method": "nvmf_set_crdt", 00:04:08.933 "params": { 00:04:08.933 "crdt1": 0, 00:04:08.933 "crdt2": 0, 00:04:08.933 "crdt3": 0 00:04:08.933 } 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "method": "nvmf_create_transport", 00:04:08.933 "params": { 00:04:08.933 "trtype": "TCP", 00:04:08.933 "max_queue_depth": 128, 00:04:08.933 "max_io_qpairs_per_ctrlr": 127, 00:04:08.933 "in_capsule_data_size": 4096, 00:04:08.933 "max_io_size": 131072, 00:04:08.933 "io_unit_size": 131072, 00:04:08.933 "max_aq_depth": 128, 00:04:08.933 "num_shared_buffers": 511, 
00:04:08.933 "buf_cache_size": 4294967295, 00:04:08.933 "dif_insert_or_strip": false, 00:04:08.933 "zcopy": false, 00:04:08.933 "c2h_success": true, 00:04:08.933 "sock_priority": 0, 00:04:08.933 "abort_timeout_sec": 1, 00:04:08.933 "ack_timeout": 0, 00:04:08.933 "data_wr_pool_size": 0 00:04:08.933 } 00:04:08.933 } 00:04:08.933 ] 00:04:08.933 }, 00:04:08.933 { 00:04:08.933 "subsystem": "iscsi", 00:04:08.933 "config": [ 00:04:08.933 { 00:04:08.933 "method": "iscsi_set_options", 00:04:08.933 "params": { 00:04:08.933 "node_base": "iqn.2016-06.io.spdk", 00:04:08.933 "max_sessions": 128, 00:04:08.933 "max_connections_per_session": 2, 00:04:08.933 "max_queue_depth": 64, 00:04:08.933 "default_time2wait": 2, 00:04:08.933 "default_time2retain": 20, 00:04:08.933 "first_burst_length": 8192, 00:04:08.933 "immediate_data": true, 00:04:08.933 "allow_duplicated_isid": false, 00:04:08.933 "error_recovery_level": 0, 00:04:08.933 "nop_timeout": 60, 00:04:08.933 "nop_in_interval": 30, 00:04:08.933 "disable_chap": false, 00:04:08.933 "require_chap": false, 00:04:08.933 "mutual_chap": false, 00:04:08.933 "chap_group": 0, 00:04:08.933 "max_large_datain_per_connection": 64, 00:04:08.933 "max_r2t_per_connection": 4, 00:04:08.933 "pdu_pool_size": 36864, 00:04:08.933 "immediate_data_pool_size": 16384, 00:04:08.933 "data_out_pool_size": 2048 00:04:08.933 } 00:04:08.933 } 00:04:08.933 ] 00:04:08.933 } 00:04:08.933 ] 00:04:08.933 } 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56915 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56915 ']' 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56915 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56915 00:04:08.933 killing process with pid 56915 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56915' 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56915 00:04:08.933 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56915 00:04:10.838 13:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56960 00:04:10.838 13:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:10.838 13:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.112 13:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56960 00:04:16.112 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56960 ']' 00:04:16.112 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56960 00:04:16.112 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.113 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.113 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56960 00:04:16.113 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.113 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:16.113 killing process with pid 56960 00:04:16.113 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56960' 00:04:16.113 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56960 00:04:16.113 13:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56960 00:04:17.490 13:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.490 13:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.490 ************************************ 00:04:17.490 END TEST skip_rpc_with_json 00:04:17.490 ************************************ 00:04:17.490 00:04:17.490 real 0m9.952s 00:04:17.490 user 0m9.415s 00:04:17.490 sys 0m0.910s 00:04:17.490 13:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.490 13:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.490 13:17:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:17.490 13:17:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.490 13:17:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.490 13:17:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.749 ************************************ 00:04:17.749 START TEST skip_rpc_with_delay 00:04:17.749 ************************************ 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:17.749 13:17:06 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.749 [2024-11-26 13:17:06.186594] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:17.749 ************************************ 00:04:17.749 END TEST skip_rpc_with_delay 00:04:17.749 ************************************ 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.749 00:04:17.749 real 0m0.201s 00:04:17.749 user 0m0.110s 00:04:17.749 sys 0m0.089s 00:04:17.749 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.750 13:17:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:17.750 13:17:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:17.750 13:17:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:17.750 13:17:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:17.750 13:17:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.750 13:17:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.750 13:17:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.750 ************************************ 00:04:17.750 START TEST exit_on_failed_rpc_init 00:04:17.750 ************************************ 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57088 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57088 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57088 ']' 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.750 13:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.009 [2024-11-26 13:17:06.438217] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:18.009 [2024-11-26 13:17:06.438430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57088 ] 00:04:18.268 [2024-11-26 13:17:06.617848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.268 [2024-11-26 13:17:06.716890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:19.205 13:17:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.205 [2024-11-26 13:17:07.615033] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:19.205 [2024-11-26 13:17:07.615431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57106 ] 00:04:19.480 [2024-11-26 13:17:07.795810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.480 [2024-11-26 13:17:07.940480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.480 [2024-11-26 13:17:07.940620] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:19.480 [2024-11-26 13:17:07.940647] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:19.480 [2024-11-26 13:17:07.940678] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57088 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57088 ']' 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57088 00:04:19.738 13:17:08 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57088 00:04:19.738 killing process with pid 57088 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57088' 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57088 00:04:19.738 13:17:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57088 00:04:21.638 ************************************ 00:04:21.638 END TEST exit_on_failed_rpc_init 00:04:21.638 ************************************ 00:04:21.638 00:04:21.638 real 0m3.642s 00:04:21.638 user 0m4.030s 00:04:21.638 sys 0m0.666s 00:04:21.638 13:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.638 13:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.638 13:17:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.638 00:04:21.638 real 0m21.011s 00:04:21.638 user 0m20.055s 00:04:21.639 sys 0m2.268s 00:04:21.639 13:17:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.639 ************************************ 00:04:21.639 END TEST skip_rpc 00:04:21.639 ************************************ 00:04:21.639 13:17:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.639 13:17:10 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:21.639 13:17:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.639 13:17:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.639 13:17:10 -- common/autotest_common.sh@10 -- # set +x 00:04:21.639 ************************************ 00:04:21.639 START TEST rpc_client 00:04:21.639 ************************************ 00:04:21.639 13:17:10 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:21.639 * Looking for test storage... 00:04:21.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:21.639 13:17:10 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.639 13:17:10 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.639 13:17:10 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.897 13:17:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.897 --rc genhtml_branch_coverage=1 00:04:21.897 --rc genhtml_function_coverage=1 00:04:21.897 --rc genhtml_legend=1 00:04:21.897 --rc geninfo_all_blocks=1 00:04:21.897 --rc geninfo_unexecuted_blocks=1 00:04:21.897 00:04:21.897 ' 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.897 --rc genhtml_branch_coverage=1 00:04:21.897 --rc genhtml_function_coverage=1 00:04:21.897 --rc 
genhtml_legend=1 00:04:21.897 --rc geninfo_all_blocks=1 00:04:21.897 --rc geninfo_unexecuted_blocks=1 00:04:21.897 00:04:21.897 ' 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.897 --rc genhtml_branch_coverage=1 00:04:21.897 --rc genhtml_function_coverage=1 00:04:21.897 --rc genhtml_legend=1 00:04:21.897 --rc geninfo_all_blocks=1 00:04:21.897 --rc geninfo_unexecuted_blocks=1 00:04:21.897 00:04:21.897 ' 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.897 --rc genhtml_branch_coverage=1 00:04:21.897 --rc genhtml_function_coverage=1 00:04:21.897 --rc genhtml_legend=1 00:04:21.897 --rc geninfo_all_blocks=1 00:04:21.897 --rc geninfo_unexecuted_blocks=1 00:04:21.897 00:04:21.897 ' 00:04:21.897 13:17:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:21.897 OK 00:04:21.897 13:17:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:21.897 00:04:21.897 real 0m0.249s 00:04:21.897 user 0m0.138s 00:04:21.897 sys 0m0.119s 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.897 13:17:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:21.897 ************************************ 00:04:21.897 END TEST rpc_client 00:04:21.897 ************************************ 00:04:21.897 13:17:10 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:21.897 13:17:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.897 13:17:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.897 13:17:10 -- common/autotest_common.sh@10 -- # set +x 00:04:21.897 ************************************ 00:04:21.897 START TEST json_config 
00:04:21.897 ************************************ 00:04:21.897 13:17:10 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:21.897 13:17:10 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.897 13:17:10 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.897 13:17:10 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.156 13:17:10 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.156 13:17:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.156 13:17:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.156 13:17:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.156 13:17:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.156 13:17:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.156 13:17:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.156 13:17:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.156 13:17:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:22.156 13:17:10 json_config -- scripts/common.sh@345 -- # : 1 00:04:22.156 13:17:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.156 13:17:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.156 13:17:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:22.156 13:17:10 json_config -- scripts/common.sh@353 -- # local d=1 00:04:22.156 13:17:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.156 13:17:10 json_config -- scripts/common.sh@355 -- # echo 1 00:04:22.156 13:17:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.156 13:17:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@353 -- # local d=2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.156 13:17:10 json_config -- scripts/common.sh@355 -- # echo 2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.156 13:17:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.156 13:17:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.156 13:17:10 json_config -- scripts/common.sh@368 -- # return 0 00:04:22.156 13:17:10 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.156 13:17:10 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.156 --rc genhtml_branch_coverage=1 00:04:22.156 --rc genhtml_function_coverage=1 00:04:22.156 --rc genhtml_legend=1 00:04:22.156 --rc geninfo_all_blocks=1 00:04:22.156 --rc geninfo_unexecuted_blocks=1 00:04:22.156 00:04:22.156 ' 00:04:22.156 13:17:10 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.156 --rc genhtml_branch_coverage=1 00:04:22.156 --rc genhtml_function_coverage=1 00:04:22.156 --rc genhtml_legend=1 00:04:22.156 --rc geninfo_all_blocks=1 00:04:22.156 --rc geninfo_unexecuted_blocks=1 00:04:22.156 00:04:22.156 ' 00:04:22.156 13:17:10 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.156 --rc genhtml_branch_coverage=1 00:04:22.156 --rc genhtml_function_coverage=1 00:04:22.156 --rc genhtml_legend=1 00:04:22.156 --rc geninfo_all_blocks=1 00:04:22.156 --rc geninfo_unexecuted_blocks=1 00:04:22.156 00:04:22.156 ' 00:04:22.156 13:17:10 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.156 --rc genhtml_branch_coverage=1 00:04:22.156 --rc genhtml_function_coverage=1 00:04:22.156 --rc genhtml_legend=1 00:04:22.156 --rc geninfo_all_blocks=1 00:04:22.156 --rc geninfo_unexecuted_blocks=1 00:04:22.156 00:04:22.156 ' 00:04:22.156 13:17:10 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.156 13:17:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:133dc7ed-3b82-427d-81c6-87c2a8a96ca8 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=133dc7ed-3b82-427d-81c6-87c2a8a96ca8 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.157 13:17:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.157 13:17:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.157 13:17:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.157 13:17:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.157 13:17:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 13:17:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 13:17:10 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 13:17:10 json_config -- paths/export.sh@5 -- # export PATH 00:04:22.157 13:17:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@51 -- # : 0 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.157 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.157 13:17:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.157 WARNING: No tests are enabled so not running JSON configuration tests 00:04:22.157 13:17:10 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:22.157 13:17:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:22.157 13:17:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:22.157 13:17:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:22.157 13:17:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:22.157 13:17:10 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:22.157 13:17:10 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:22.157 ************************************ 00:04:22.157 END TEST json_config 00:04:22.157 ************************************ 00:04:22.157 00:04:22.157 real 0m0.188s 00:04:22.157 user 0m0.123s 00:04:22.157 sys 0m0.069s 00:04:22.157 13:17:10 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.157 13:17:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.157 13:17:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.157 13:17:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.157 13:17:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.157 13:17:10 -- common/autotest_common.sh@10 -- # set +x 00:04:22.157 ************************************ 00:04:22.157 START TEST json_config_extra_key 00:04:22.157 ************************************ 00:04:22.157 13:17:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.157 13:17:10 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.157 13:17:10 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:22.157 13:17:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.416 13:17:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.416 13:17:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.417 13:17:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.417 13:17:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.417 --rc genhtml_branch_coverage=1 00:04:22.417 --rc genhtml_function_coverage=1 00:04:22.417 --rc genhtml_legend=1 00:04:22.417 --rc geninfo_all_blocks=1 00:04:22.417 --rc geninfo_unexecuted_blocks=1 00:04:22.417 00:04:22.417 ' 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.417 --rc genhtml_branch_coverage=1 00:04:22.417 --rc genhtml_function_coverage=1 00:04:22.417 --rc 
genhtml_legend=1 00:04:22.417 --rc geninfo_all_blocks=1 00:04:22.417 --rc geninfo_unexecuted_blocks=1 00:04:22.417 00:04:22.417 ' 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.417 --rc genhtml_branch_coverage=1 00:04:22.417 --rc genhtml_function_coverage=1 00:04:22.417 --rc genhtml_legend=1 00:04:22.417 --rc geninfo_all_blocks=1 00:04:22.417 --rc geninfo_unexecuted_blocks=1 00:04:22.417 00:04:22.417 ' 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.417 --rc genhtml_branch_coverage=1 00:04:22.417 --rc genhtml_function_coverage=1 00:04:22.417 --rc genhtml_legend=1 00:04:22.417 --rc geninfo_all_blocks=1 00:04:22.417 --rc geninfo_unexecuted_blocks=1 00:04:22.417 00:04:22.417 ' 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:133dc7ed-3b82-427d-81c6-87c2a8a96ca8 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=133dc7ed-3b82-427d-81c6-87c2a8a96ca8 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.417 13:17:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.417 13:17:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.417 13:17:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.417 13:17:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.417 13:17:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.417 13:17:10 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.417 13:17:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.417 13:17:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:22.417 13:17:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.417 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.417 13:17:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:22.417 INFO: launching applications... 00:04:22.417 Waiting for target to run... 00:04:22.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:22.417 13:17:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57311 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57311 /var/tmp/spdk_tgt.sock 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57311 ']' 00:04:22.417 13:17:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.417 13:17:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.417 [2024-11-26 13:17:10.950133] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:22.417 [2024-11-26 13:17:10.950646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57311 ] 00:04:23.031 [2024-11-26 13:17:11.399121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.031 [2024-11-26 13:17:11.495700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.600 13:17:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.600 13:17:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:23.600 00:04:23.600 13:17:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:23.600 INFO: shutting down applications... 00:04:23.600 13:17:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57311 ]] 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57311 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57311 00:04:23.600 13:17:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.171 13:17:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.171 13:17:12 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.171 13:17:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57311 00:04:24.171 13:17:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.737 13:17:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.737 13:17:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.737 13:17:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57311 00:04:24.737 13:17:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.303 13:17:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.303 13:17:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.303 13:17:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57311 00:04:25.303 13:17:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.560 13:17:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.560 13:17:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.560 13:17:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57311 00:04:25.560 13:17:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.127 13:17:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.127 13:17:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.127 13:17:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57311 00:04:26.127 13:17:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:26.127 13:17:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:26.127 13:17:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:26.127 SPDK target shutdown done 00:04:26.127 13:17:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:26.127 Success 
00:04:26.127 13:17:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:26.127 ************************************ 00:04:26.127 END TEST json_config_extra_key 00:04:26.127 ************************************ 00:04:26.127 00:04:26.127 real 0m3.993s 00:04:26.127 user 0m3.358s 00:04:26.127 sys 0m0.606s 00:04:26.127 13:17:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.127 13:17:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.127 13:17:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.127 13:17:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.127 13:17:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.127 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:04:26.127 ************************************ 00:04:26.127 START TEST alias_rpc 00:04:26.127 ************************************ 00:04:26.127 13:17:14 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.386 * Looking for test storage... 
00:04:26.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.386 13:17:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.386 --rc genhtml_branch_coverage=1 00:04:26.386 --rc genhtml_function_coverage=1 00:04:26.386 --rc genhtml_legend=1 00:04:26.386 --rc geninfo_all_blocks=1 00:04:26.386 --rc geninfo_unexecuted_blocks=1 00:04:26.386 00:04:26.386 ' 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.386 --rc genhtml_branch_coverage=1 00:04:26.386 --rc genhtml_function_coverage=1 00:04:26.386 --rc genhtml_legend=1 00:04:26.386 --rc geninfo_all_blocks=1 00:04:26.386 --rc geninfo_unexecuted_blocks=1 00:04:26.386 00:04:26.386 ' 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.386 --rc genhtml_branch_coverage=1 00:04:26.386 --rc genhtml_function_coverage=1 00:04:26.386 --rc genhtml_legend=1 00:04:26.386 --rc geninfo_all_blocks=1 00:04:26.386 --rc geninfo_unexecuted_blocks=1 00:04:26.386 00:04:26.386 ' 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.386 --rc genhtml_branch_coverage=1 00:04:26.386 --rc genhtml_function_coverage=1 00:04:26.386 --rc genhtml_legend=1 00:04:26.386 --rc geninfo_all_blocks=1 00:04:26.386 --rc geninfo_unexecuted_blocks=1 00:04:26.386 00:04:26.386 ' 00:04:26.386 13:17:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:26.386 13:17:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.386 13:17:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57416 00:04:26.386 13:17:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57416 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57416 ']' 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.386 13:17:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.386 [2024-11-26 13:17:14.939307] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:26.387 [2024-11-26 13:17:14.939505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57416 ] 00:04:26.645 [2024-11-26 13:17:15.120185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.904 [2024-11-26 13:17:15.218723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.470 13:17:15 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.470 13:17:15 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:27.470 13:17:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:27.729 13:17:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57416 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57416 ']' 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57416 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57416 00:04:27.729 killing process with pid 57416 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57416' 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@973 -- # kill 57416 00:04:27.729 13:17:16 alias_rpc -- common/autotest_common.sh@978 -- # wait 57416 00:04:29.632 ************************************ 00:04:29.632 END TEST alias_rpc 00:04:29.632 ************************************ 00:04:29.632 00:04:29.632 real 
0m3.377s 00:04:29.632 user 0m3.451s 00:04:29.632 sys 0m0.585s 00:04:29.632 13:17:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.632 13:17:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.632 13:17:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:29.632 13:17:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:29.632 13:17:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.632 13:17:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.632 13:17:18 -- common/autotest_common.sh@10 -- # set +x 00:04:29.632 ************************************ 00:04:29.632 START TEST spdkcli_tcp 00:04:29.632 ************************************ 00:04:29.632 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:29.632 * Looking for test storage... 00:04:29.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:29.632 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.632 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.632 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.891 
13:17:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.891 13:17:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.891 --rc genhtml_branch_coverage=1 00:04:29.891 --rc genhtml_function_coverage=1 00:04:29.891 --rc genhtml_legend=1 
00:04:29.891 --rc geninfo_all_blocks=1 00:04:29.891 --rc geninfo_unexecuted_blocks=1 00:04:29.891 00:04:29.891 ' 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.891 --rc genhtml_branch_coverage=1 00:04:29.891 --rc genhtml_function_coverage=1 00:04:29.891 --rc genhtml_legend=1 00:04:29.891 --rc geninfo_all_blocks=1 00:04:29.891 --rc geninfo_unexecuted_blocks=1 00:04:29.891 00:04:29.891 ' 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.891 --rc genhtml_branch_coverage=1 00:04:29.891 --rc genhtml_function_coverage=1 00:04:29.891 --rc genhtml_legend=1 00:04:29.891 --rc geninfo_all_blocks=1 00:04:29.891 --rc geninfo_unexecuted_blocks=1 00:04:29.891 00:04:29.891 ' 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.891 --rc genhtml_branch_coverage=1 00:04:29.891 --rc genhtml_function_coverage=1 00:04:29.891 --rc genhtml_legend=1 00:04:29.891 --rc geninfo_all_blocks=1 00:04:29.891 --rc geninfo_unexecuted_blocks=1 00:04:29.891 00:04:29.891 ' 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:29.891 13:17:18 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57512 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:29.891 13:17:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57512 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57512 ']' 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.891 13:17:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.891 [2024-11-26 13:17:18.346631] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:29.891 [2024-11-26 13:17:18.347124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57512 ] 00:04:30.151 [2024-11-26 13:17:18.528372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.151 [2024-11-26 13:17:18.630009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.151 [2024-11-26 13:17:18.630024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.155 13:17:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.155 13:17:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:31.155 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57529 00:04:31.155 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:31.155 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.155 [ 00:04:31.155 "bdev_malloc_delete", 00:04:31.155 "bdev_malloc_create", 00:04:31.155 "bdev_null_resize", 00:04:31.155 "bdev_null_delete", 00:04:31.155 "bdev_null_create", 00:04:31.155 "bdev_nvme_cuse_unregister", 00:04:31.155 "bdev_nvme_cuse_register", 00:04:31.155 "bdev_opal_new_user", 00:04:31.155 "bdev_opal_set_lock_state", 00:04:31.155 "bdev_opal_delete", 00:04:31.156 "bdev_opal_get_info", 00:04:31.156 "bdev_opal_create", 00:04:31.156 "bdev_nvme_opal_revert", 00:04:31.156 "bdev_nvme_opal_init", 00:04:31.156 "bdev_nvme_send_cmd", 00:04:31.156 "bdev_nvme_set_keys", 00:04:31.156 "bdev_nvme_get_path_iostat", 00:04:31.156 "bdev_nvme_get_mdns_discovery_info", 00:04:31.156 "bdev_nvme_stop_mdns_discovery", 00:04:31.156 "bdev_nvme_start_mdns_discovery", 00:04:31.156 "bdev_nvme_set_multipath_policy", 00:04:31.156 
"bdev_nvme_set_preferred_path", 00:04:31.156 "bdev_nvme_get_io_paths", 00:04:31.156 "bdev_nvme_remove_error_injection", 00:04:31.156 "bdev_nvme_add_error_injection", 00:04:31.156 "bdev_nvme_get_discovery_info", 00:04:31.156 "bdev_nvme_stop_discovery", 00:04:31.156 "bdev_nvme_start_discovery", 00:04:31.156 "bdev_nvme_get_controller_health_info", 00:04:31.156 "bdev_nvme_disable_controller", 00:04:31.156 "bdev_nvme_enable_controller", 00:04:31.156 "bdev_nvme_reset_controller", 00:04:31.156 "bdev_nvme_get_transport_statistics", 00:04:31.156 "bdev_nvme_apply_firmware", 00:04:31.156 "bdev_nvme_detach_controller", 00:04:31.156 "bdev_nvme_get_controllers", 00:04:31.156 "bdev_nvme_attach_controller", 00:04:31.156 "bdev_nvme_set_hotplug", 00:04:31.156 "bdev_nvme_set_options", 00:04:31.156 "bdev_passthru_delete", 00:04:31.156 "bdev_passthru_create", 00:04:31.156 "bdev_lvol_set_parent_bdev", 00:04:31.156 "bdev_lvol_set_parent", 00:04:31.156 "bdev_lvol_check_shallow_copy", 00:04:31.156 "bdev_lvol_start_shallow_copy", 00:04:31.156 "bdev_lvol_grow_lvstore", 00:04:31.156 "bdev_lvol_get_lvols", 00:04:31.156 "bdev_lvol_get_lvstores", 00:04:31.156 "bdev_lvol_delete", 00:04:31.156 "bdev_lvol_set_read_only", 00:04:31.156 "bdev_lvol_resize", 00:04:31.156 "bdev_lvol_decouple_parent", 00:04:31.156 "bdev_lvol_inflate", 00:04:31.156 "bdev_lvol_rename", 00:04:31.156 "bdev_lvol_clone_bdev", 00:04:31.156 "bdev_lvol_clone", 00:04:31.156 "bdev_lvol_snapshot", 00:04:31.156 "bdev_lvol_create", 00:04:31.156 "bdev_lvol_delete_lvstore", 00:04:31.156 "bdev_lvol_rename_lvstore", 00:04:31.156 "bdev_lvol_create_lvstore", 00:04:31.156 "bdev_raid_set_options", 00:04:31.156 "bdev_raid_remove_base_bdev", 00:04:31.156 "bdev_raid_add_base_bdev", 00:04:31.156 "bdev_raid_delete", 00:04:31.156 "bdev_raid_create", 00:04:31.156 "bdev_raid_get_bdevs", 00:04:31.156 "bdev_error_inject_error", 00:04:31.156 "bdev_error_delete", 00:04:31.156 "bdev_error_create", 00:04:31.156 "bdev_split_delete", 00:04:31.156 
"bdev_split_create", 00:04:31.156 "bdev_delay_delete", 00:04:31.156 "bdev_delay_create", 00:04:31.156 "bdev_delay_update_latency", 00:04:31.156 "bdev_zone_block_delete", 00:04:31.156 "bdev_zone_block_create", 00:04:31.156 "blobfs_create", 00:04:31.156 "blobfs_detect", 00:04:31.156 "blobfs_set_cache_size", 00:04:31.156 "bdev_aio_delete", 00:04:31.156 "bdev_aio_rescan", 00:04:31.156 "bdev_aio_create", 00:04:31.156 "bdev_ftl_set_property", 00:04:31.156 "bdev_ftl_get_properties", 00:04:31.156 "bdev_ftl_get_stats", 00:04:31.156 "bdev_ftl_unmap", 00:04:31.156 "bdev_ftl_unload", 00:04:31.156 "bdev_ftl_delete", 00:04:31.156 "bdev_ftl_load", 00:04:31.156 "bdev_ftl_create", 00:04:31.156 "bdev_virtio_attach_controller", 00:04:31.156 "bdev_virtio_scsi_get_devices", 00:04:31.156 "bdev_virtio_detach_controller", 00:04:31.156 "bdev_virtio_blk_set_hotplug", 00:04:31.156 "bdev_iscsi_delete", 00:04:31.156 "bdev_iscsi_create", 00:04:31.156 "bdev_iscsi_set_options", 00:04:31.156 "accel_error_inject_error", 00:04:31.156 "ioat_scan_accel_module", 00:04:31.156 "dsa_scan_accel_module", 00:04:31.156 "iaa_scan_accel_module", 00:04:31.156 "keyring_file_remove_key", 00:04:31.156 "keyring_file_add_key", 00:04:31.156 "keyring_linux_set_options", 00:04:31.156 "fsdev_aio_delete", 00:04:31.156 "fsdev_aio_create", 00:04:31.156 "iscsi_get_histogram", 00:04:31.156 "iscsi_enable_histogram", 00:04:31.156 "iscsi_set_options", 00:04:31.156 "iscsi_get_auth_groups", 00:04:31.156 "iscsi_auth_group_remove_secret", 00:04:31.156 "iscsi_auth_group_add_secret", 00:04:31.156 "iscsi_delete_auth_group", 00:04:31.156 "iscsi_create_auth_group", 00:04:31.156 "iscsi_set_discovery_auth", 00:04:31.156 "iscsi_get_options", 00:04:31.156 "iscsi_target_node_request_logout", 00:04:31.156 "iscsi_target_node_set_redirect", 00:04:31.156 "iscsi_target_node_set_auth", 00:04:31.156 "iscsi_target_node_add_lun", 00:04:31.156 "iscsi_get_stats", 00:04:31.156 "iscsi_get_connections", 00:04:31.156 "iscsi_portal_group_set_auth", 
00:04:31.156 "iscsi_start_portal_group", 00:04:31.156 "iscsi_delete_portal_group", 00:04:31.156 "iscsi_create_portal_group", 00:04:31.156 "iscsi_get_portal_groups", 00:04:31.156 "iscsi_delete_target_node", 00:04:31.156 "iscsi_target_node_remove_pg_ig_maps", 00:04:31.156 "iscsi_target_node_add_pg_ig_maps", 00:04:31.156 "iscsi_create_target_node", 00:04:31.156 "iscsi_get_target_nodes", 00:04:31.156 "iscsi_delete_initiator_group", 00:04:31.156 "iscsi_initiator_group_remove_initiators", 00:04:31.156 "iscsi_initiator_group_add_initiators", 00:04:31.156 "iscsi_create_initiator_group", 00:04:31.156 "iscsi_get_initiator_groups", 00:04:31.156 "nvmf_set_crdt", 00:04:31.156 "nvmf_set_config", 00:04:31.156 "nvmf_set_max_subsystems", 00:04:31.156 "nvmf_stop_mdns_prr", 00:04:31.156 "nvmf_publish_mdns_prr", 00:04:31.156 "nvmf_subsystem_get_listeners", 00:04:31.156 "nvmf_subsystem_get_qpairs", 00:04:31.156 "nvmf_subsystem_get_controllers", 00:04:31.156 "nvmf_get_stats", 00:04:31.156 "nvmf_get_transports", 00:04:31.156 "nvmf_create_transport", 00:04:31.156 "nvmf_get_targets", 00:04:31.156 "nvmf_delete_target", 00:04:31.156 "nvmf_create_target", 00:04:31.156 "nvmf_subsystem_allow_any_host", 00:04:31.156 "nvmf_subsystem_set_keys", 00:04:31.156 "nvmf_subsystem_remove_host", 00:04:31.156 "nvmf_subsystem_add_host", 00:04:31.156 "nvmf_ns_remove_host", 00:04:31.156 "nvmf_ns_add_host", 00:04:31.156 "nvmf_subsystem_remove_ns", 00:04:31.156 "nvmf_subsystem_set_ns_ana_group", 00:04:31.156 "nvmf_subsystem_add_ns", 00:04:31.156 "nvmf_subsystem_listener_set_ana_state", 00:04:31.156 "nvmf_discovery_get_referrals", 00:04:31.156 "nvmf_discovery_remove_referral", 00:04:31.156 "nvmf_discovery_add_referral", 00:04:31.156 "nvmf_subsystem_remove_listener", 00:04:31.156 "nvmf_subsystem_add_listener", 00:04:31.156 "nvmf_delete_subsystem", 00:04:31.156 "nvmf_create_subsystem", 00:04:31.156 "nvmf_get_subsystems", 00:04:31.156 "env_dpdk_get_mem_stats", 00:04:31.156 "nbd_get_disks", 00:04:31.156 
"nbd_stop_disk", 00:04:31.156 "nbd_start_disk", 00:04:31.156 "ublk_recover_disk", 00:04:31.156 "ublk_get_disks", 00:04:31.156 "ublk_stop_disk", 00:04:31.156 "ublk_start_disk", 00:04:31.156 "ublk_destroy_target", 00:04:31.156 "ublk_create_target", 00:04:31.156 "virtio_blk_create_transport", 00:04:31.156 "virtio_blk_get_transports", 00:04:31.156 "vhost_controller_set_coalescing", 00:04:31.156 "vhost_get_controllers", 00:04:31.156 "vhost_delete_controller", 00:04:31.156 "vhost_create_blk_controller", 00:04:31.156 "vhost_scsi_controller_remove_target", 00:04:31.156 "vhost_scsi_controller_add_target", 00:04:31.156 "vhost_start_scsi_controller", 00:04:31.156 "vhost_create_scsi_controller", 00:04:31.156 "thread_set_cpumask", 00:04:31.156 "scheduler_set_options", 00:04:31.156 "framework_get_governor", 00:04:31.156 "framework_get_scheduler", 00:04:31.156 "framework_set_scheduler", 00:04:31.156 "framework_get_reactors", 00:04:31.156 "thread_get_io_channels", 00:04:31.156 "thread_get_pollers", 00:04:31.156 "thread_get_stats", 00:04:31.156 "framework_monitor_context_switch", 00:04:31.156 "spdk_kill_instance", 00:04:31.156 "log_enable_timestamps", 00:04:31.156 "log_get_flags", 00:04:31.156 "log_clear_flag", 00:04:31.156 "log_set_flag", 00:04:31.156 "log_get_level", 00:04:31.156 "log_set_level", 00:04:31.156 "log_get_print_level", 00:04:31.156 "log_set_print_level", 00:04:31.156 "framework_enable_cpumask_locks", 00:04:31.156 "framework_disable_cpumask_locks", 00:04:31.156 "framework_wait_init", 00:04:31.156 "framework_start_init", 00:04:31.156 "scsi_get_devices", 00:04:31.156 "bdev_get_histogram", 00:04:31.156 "bdev_enable_histogram", 00:04:31.156 "bdev_set_qos_limit", 00:04:31.156 "bdev_set_qd_sampling_period", 00:04:31.156 "bdev_get_bdevs", 00:04:31.156 "bdev_reset_iostat", 00:04:31.156 "bdev_get_iostat", 00:04:31.156 "bdev_examine", 00:04:31.156 "bdev_wait_for_examine", 00:04:31.156 "bdev_set_options", 00:04:31.156 "accel_get_stats", 00:04:31.156 "accel_set_options", 
00:04:31.156 "accel_set_driver", 00:04:31.156 "accel_crypto_key_destroy", 00:04:31.156 "accel_crypto_keys_get", 00:04:31.156 "accel_crypto_key_create", 00:04:31.156 "accel_assign_opc", 00:04:31.156 "accel_get_module_info", 00:04:31.156 "accel_get_opc_assignments", 00:04:31.156 "vmd_rescan", 00:04:31.156 "vmd_remove_device", 00:04:31.156 "vmd_enable", 00:04:31.156 "sock_get_default_impl", 00:04:31.156 "sock_set_default_impl", 00:04:31.156 "sock_impl_set_options", 00:04:31.156 "sock_impl_get_options", 00:04:31.156 "iobuf_get_stats", 00:04:31.156 "iobuf_set_options", 00:04:31.156 "keyring_get_keys", 00:04:31.156 "framework_get_pci_devices", 00:04:31.156 "framework_get_config", 00:04:31.156 "framework_get_subsystems", 00:04:31.156 "fsdev_set_opts", 00:04:31.156 "fsdev_get_opts", 00:04:31.156 "trace_get_info", 00:04:31.156 "trace_get_tpoint_group_mask", 00:04:31.157 "trace_disable_tpoint_group", 00:04:31.157 "trace_enable_tpoint_group", 00:04:31.157 "trace_clear_tpoint_mask", 00:04:31.157 "trace_set_tpoint_mask", 00:04:31.157 "notify_get_notifications", 00:04:31.157 "notify_get_types", 00:04:31.157 "spdk_get_version", 00:04:31.157 "rpc_get_methods" 00:04:31.157 ] 00:04:31.157 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.157 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.157 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57512 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57512 ']' 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57512 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.157 13:17:19 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57512 00:04:31.157 killing process with pid 57512 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57512' 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57512 00:04:31.157 13:17:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57512 00:04:33.061 ************************************ 00:04:33.061 END TEST spdkcli_tcp 00:04:33.061 ************************************ 00:04:33.061 00:04:33.061 real 0m3.453s 00:04:33.061 user 0m6.180s 00:04:33.061 sys 0m0.636s 00:04:33.061 13:17:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.061 13:17:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.061 13:17:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.061 13:17:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.061 13:17:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.061 13:17:21 -- common/autotest_common.sh@10 -- # set +x 00:04:33.061 ************************************ 00:04:33.061 START TEST dpdk_mem_utility 00:04:33.061 ************************************ 00:04:33.061 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.320 * Looking for test storage... 
00:04:33.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:33.320 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.320 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.320 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.320 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.320 13:17:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.320 13:17:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.320 13:17:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.320 13:17:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.321 13:17:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.321 --rc genhtml_branch_coverage=1 00:04:33.321 --rc genhtml_function_coverage=1 00:04:33.321 --rc genhtml_legend=1 00:04:33.321 --rc geninfo_all_blocks=1 00:04:33.321 --rc geninfo_unexecuted_blocks=1 00:04:33.321 00:04:33.321 ' 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.321 --rc genhtml_branch_coverage=1 00:04:33.321 --rc genhtml_function_coverage=1 00:04:33.321 --rc genhtml_legend=1 00:04:33.321 --rc geninfo_all_blocks=1 00:04:33.321 --rc 
geninfo_unexecuted_blocks=1 00:04:33.321 00:04:33.321 ' 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.321 --rc genhtml_branch_coverage=1 00:04:33.321 --rc genhtml_function_coverage=1 00:04:33.321 --rc genhtml_legend=1 00:04:33.321 --rc geninfo_all_blocks=1 00:04:33.321 --rc geninfo_unexecuted_blocks=1 00:04:33.321 00:04:33.321 ' 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.321 --rc genhtml_branch_coverage=1 00:04:33.321 --rc genhtml_function_coverage=1 00:04:33.321 --rc genhtml_legend=1 00:04:33.321 --rc geninfo_all_blocks=1 00:04:33.321 --rc geninfo_unexecuted_blocks=1 00:04:33.321 00:04:33.321 ' 00:04:33.321 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:33.321 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57623 00:04:33.321 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.321 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57623 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57623 ']' 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.321 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.580 [2024-11-26 13:17:21.890972] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:33.580 [2024-11-26 13:17:21.891356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57623 ] 00:04:33.580 [2024-11-26 13:17:22.076502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.839 [2024-11-26 13:17:22.176659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.408 13:17:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.408 13:17:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:34.408 13:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:34.408 13:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:34.408 13:17:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.408 13:17:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.408 { 00:04:34.408 "filename": "/tmp/spdk_mem_dump.txt" 00:04:34.408 } 00:04:34.408 13:17:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.408 13:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:34.408 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:34.408 1 heaps totaling size 816.000000 MiB 00:04:34.408 size: 816.000000 MiB heap id: 0 00:04:34.408 end heaps---------- 00:04:34.408 9 mempools totaling size 595.772034 MiB 00:04:34.408 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:34.408 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:34.408 size: 92.545471 MiB name: bdev_io_57623 00:04:34.408 size: 50.003479 MiB name: msgpool_57623 00:04:34.408 size: 36.509338 MiB name: fsdev_io_57623 00:04:34.408 size: 21.763794 MiB name: PDU_Pool 00:04:34.408 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:34.408 size: 4.133484 MiB name: evtpool_57623 00:04:34.408 size: 0.026123 MiB name: Session_Pool 00:04:34.408 end mempools------- 00:04:34.408 6 memzones totaling size 4.142822 MiB 00:04:34.408 size: 1.000366 MiB name: RG_ring_0_57623 00:04:34.408 size: 1.000366 MiB name: RG_ring_1_57623 00:04:34.408 size: 1.000366 MiB name: RG_ring_4_57623 00:04:34.408 size: 1.000366 MiB name: RG_ring_5_57623 00:04:34.408 size: 0.125366 MiB name: RG_ring_2_57623 00:04:34.408 size: 0.015991 MiB name: RG_ring_3_57623 00:04:34.408 end memzones------- 00:04:34.408 13:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:34.699 heap id: 0 total size: 816.000000 MiB number of busy elements: 313 number of free elements: 18 00:04:34.699 list of free elements. 
size: 16.791870 MiB 00:04:34.699 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:34.699 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:34.699 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:34.699 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:34.699 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:34.699 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:34.699 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:34.699 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:34.699 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:34.699 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:34.699 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:34.699 element at address: 0x20001ac00000 with size: 0.562195 MiB 00:04:34.699 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:34.699 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:34.699 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:34.699 element at address: 0x200012c00000 with size: 0.443481 MiB 00:04:34.699 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:34.699 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:34.699 list of standard malloc elements. 
size: 199.287231 MiB 00:04:34.699 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:34.699 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:34.699 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:34.699 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:34.699 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:34.699 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:34.699 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:34.699 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:34.699 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:34.699 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:34.699 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:34.699 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:34.699 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:34.699 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:34.699 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:34.700 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:34.700 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:34.700 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:34.700 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:34.700 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac913c0 with size: 0.000244 
MiB 00:04:34.700 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac92fc0 
with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:34.700 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:34.701 element at 
address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:34.701 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:34.701 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806be80 with size: 0.000244 MiB 
00:04:34.701 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806da80 with 
size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:34.701 element at address: 
0x20002806f680 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:34.701 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:34.701 list of memzone associated elements. size: 599.920898 MiB 00:04:34.701 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:34.701 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:34.701 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:34.701 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:34.701 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:34.701 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57623_0 00:04:34.701 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:34.701 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57623_0 00:04:34.701 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:34.701 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57623_0 00:04:34.701 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:34.701 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:34.701 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:34.701 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:34.701 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:34.701 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57623_0 00:04:34.701 element at address: 0x2000009ffdc0 with 
size: 2.000549 MiB 00:04:34.701 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57623 00:04:34.701 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:34.701 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57623 00:04:34.701 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:34.701 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:34.701 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:34.701 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:34.701 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:34.701 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:34.701 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:34.701 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:34.701 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:34.701 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57623 00:04:34.701 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:34.701 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57623 00:04:34.701 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:34.701 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57623 00:04:34.701 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:34.701 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57623 00:04:34.701 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:34.701 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57623 00:04:34.701 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:34.701 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57623 00:04:34.701 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:34.701 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:34.701 element at address: 0x200012c72280 with size: 
0.500549 MiB 00:04:34.701 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:34.701 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:34.701 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:34.701 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:34.701 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57623 00:04:34.701 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:34.701 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57623 00:04:34.701 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:34.701 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:34.702 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:34.702 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:34.702 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:34.702 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57623 00:04:34.702 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:34.702 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:34.702 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:34.702 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57623 00:04:34.702 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:34.702 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57623 00:04:34.702 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:34.702 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57623 00:04:34.702 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:34.702 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:34.702 13:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:34.702 13:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 
-- # killprocess 57623 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57623 ']' 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57623 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57623 00:04:34.702 killing process with pid 57623 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57623' 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57623 00:04:34.702 13:17:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57623 00:04:36.607 ************************************ 00:04:36.607 END TEST dpdk_mem_utility 00:04:36.607 ************************************ 00:04:36.607 00:04:36.607 real 0m3.282s 00:04:36.607 user 0m3.368s 00:04:36.607 sys 0m0.591s 00:04:36.607 13:17:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.607 13:17:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.607 13:17:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:36.607 13:17:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.607 13:17:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.607 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:04:36.607 ************************************ 00:04:36.607 START TEST event 00:04:36.607 ************************************ 00:04:36.607 13:17:24 event -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:36.607 * Looking for test storage... 00:04:36.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:36.607 13:17:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.607 13:17:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.607 13:17:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.607 13:17:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.607 13:17:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.607 13:17:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.607 13:17:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.607 13:17:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.607 13:17:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.607 13:17:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.607 13:17:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.607 13:17:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.607 13:17:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.607 13:17:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.607 13:17:25 event -- scripts/common.sh@344 -- # case "$op" in 00:04:36.607 13:17:25 event -- scripts/common.sh@345 -- # : 1 00:04:36.607 13:17:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.607 13:17:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.607 13:17:25 event -- scripts/common.sh@365 -- # decimal 1 00:04:36.607 13:17:25 event -- scripts/common.sh@353 -- # local d=1 00:04:36.607 13:17:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.607 13:17:25 event -- scripts/common.sh@355 -- # echo 1 00:04:36.607 13:17:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.607 13:17:25 event -- scripts/common.sh@366 -- # decimal 2 00:04:36.607 13:17:25 event -- scripts/common.sh@353 -- # local d=2 00:04:36.607 13:17:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.607 13:17:25 event -- scripts/common.sh@355 -- # echo 2 00:04:36.607 13:17:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.607 13:17:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.607 13:17:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.607 13:17:25 event -- scripts/common.sh@368 -- # return 0 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.607 --rc genhtml_branch_coverage=1 00:04:36.607 --rc genhtml_function_coverage=1 00:04:36.607 --rc genhtml_legend=1 00:04:36.607 --rc geninfo_all_blocks=1 00:04:36.607 --rc geninfo_unexecuted_blocks=1 00:04:36.607 00:04:36.607 ' 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.607 --rc genhtml_branch_coverage=1 00:04:36.607 --rc genhtml_function_coverage=1 00:04:36.607 --rc genhtml_legend=1 00:04:36.607 --rc geninfo_all_blocks=1 00:04:36.607 --rc geninfo_unexecuted_blocks=1 00:04:36.607 00:04:36.607 ' 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.607 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:36.607 --rc genhtml_branch_coverage=1 00:04:36.607 --rc genhtml_function_coverage=1 00:04:36.607 --rc genhtml_legend=1 00:04:36.607 --rc geninfo_all_blocks=1 00:04:36.607 --rc geninfo_unexecuted_blocks=1 00:04:36.607 00:04:36.607 ' 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.607 --rc genhtml_branch_coverage=1 00:04:36.607 --rc genhtml_function_coverage=1 00:04:36.607 --rc genhtml_legend=1 00:04:36.607 --rc geninfo_all_blocks=1 00:04:36.607 --rc geninfo_unexecuted_blocks=1 00:04:36.607 00:04:36.607 ' 00:04:36.607 13:17:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:36.607 13:17:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:36.607 13:17:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:36.607 13:17:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.607 13:17:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.607 ************************************ 00:04:36.607 START TEST event_perf 00:04:36.607 ************************************ 00:04:36.607 13:17:25 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:36.607 Running I/O for 1 seconds...[2024-11-26 13:17:25.140643] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:36.607 [2024-11-26 13:17:25.140984] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57731 ] 00:04:36.866 [2024-11-26 13:17:25.325821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:37.125 [2024-11-26 13:17:25.436158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.125 [2024-11-26 13:17:25.436310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.125 [2024-11-26 13:17:25.437285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.125 Running I/O for 1 seconds...[2024-11-26 13:17:25.437287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.094 00:04:38.094 lcore 0: 133619 00:04:38.094 lcore 1: 133619 00:04:38.094 lcore 2: 133617 00:04:38.094 lcore 3: 133618 00:04:38.094 done. 
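The event_perf run above follows the suite's START/END banner pattern: a banner is printed, the timed test binary runs, and the END banner plus `real/user/sys` timing follow. A minimal sketch of such a wrapper, under the assumption of a hypothetical simplified `run_test_sketch` (the suite's real `run_test` in autotest_common.sh also manages xtrace and timing):

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: banner, run the test command,
# banner, propagate the exit status. Hypothetical simplification of
# the harness whose output appears in the log above.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?   # capture the test command's status before the banners
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch demo_true true && echo "demo_true passed"
```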
00:04:38.094 00:04:38.094 real 0m1.557s 00:04:38.094 user 0m4.309s 00:04:38.094 sys 0m0.125s 00:04:38.094 13:17:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.094 13:17:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.094 ************************************ 00:04:38.094 END TEST event_perf 00:04:38.094 ************************************ 00:04:38.352 13:17:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:38.352 13:17:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:38.352 13:17:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.352 13:17:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.352 ************************************ 00:04:38.352 START TEST event_reactor 00:04:38.352 ************************************ 00:04:38.352 13:17:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:38.352 [2024-11-26 13:17:26.758827] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:38.352 [2024-11-26 13:17:26.759153] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57765 ] 00:04:38.611 [2024-11-26 13:17:26.940591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.611 [2024-11-26 13:17:27.060444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.992 test_start 00:04:39.992 oneshot 00:04:39.992 tick 100 00:04:39.992 tick 100 00:04:39.992 tick 250 00:04:39.992 tick 100 00:04:39.992 tick 100 00:04:39.992 tick 100 00:04:39.992 tick 250 00:04:39.992 tick 500 00:04:39.992 tick 100 00:04:39.992 tick 100 00:04:39.992 tick 250 00:04:39.992 tick 100 00:04:39.992 tick 100 00:04:39.992 test_end 00:04:39.992 00:04:39.992 real 0m1.545s 00:04:39.992 user 0m1.335s 00:04:39.992 sys 0m0.101s 00:04:39.992 ************************************ 00:04:39.992 END TEST event_reactor 00:04:39.992 ************************************ 00:04:39.992 13:17:28 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.992 13:17:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 13:17:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.992 13:17:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:39.992 13:17:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.992 13:17:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 ************************************ 00:04:39.992 START TEST event_reactor_perf 00:04:39.992 ************************************ 00:04:39.992 13:17:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.992 [2024-11-26 
13:17:28.358182] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:39.992 [2024-11-26 13:17:28.358744] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57807 ] 00:04:39.992 [2024-11-26 13:17:28.546795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.252 [2024-11-26 13:17:28.666163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.632 test_start 00:04:41.632 test_end 00:04:41.632 Performance: 363144 events per second 00:04:41.632 00:04:41.632 real 0m1.545s 00:04:41.632 user 0m1.322s 00:04:41.632 sys 0m0.115s 00:04:41.632 13:17:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.632 ************************************ 00:04:41.632 END TEST event_reactor_perf 00:04:41.632 ************************************ 00:04:41.632 13:17:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.632 13:17:29 event -- event/event.sh@49 -- # uname -s 00:04:41.632 13:17:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:41.632 13:17:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:41.632 13:17:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.632 13:17:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.632 13:17:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.632 ************************************ 00:04:41.632 START TEST event_scheduler 00:04:41.632 ************************************ 00:04:41.632 13:17:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:41.632 * Looking for test storage... 
00:04:41.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:41.632 13:17:30 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.632 13:17:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.632 13:17:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.632 13:17:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.632 13:17:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:41.632 13:17:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.632 13:17:30 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.632 --rc genhtml_branch_coverage=1 00:04:41.632 --rc genhtml_function_coverage=1 00:04:41.632 --rc genhtml_legend=1 00:04:41.632 --rc geninfo_all_blocks=1 00:04:41.632 --rc geninfo_unexecuted_blocks=1 00:04:41.632 00:04:41.632 ' 00:04:41.632 13:17:30 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.632 --rc genhtml_branch_coverage=1 00:04:41.632 --rc genhtml_function_coverage=1 00:04:41.632 --rc 
genhtml_legend=1 00:04:41.632 --rc geninfo_all_blocks=1 00:04:41.633 --rc geninfo_unexecuted_blocks=1 00:04:41.633 00:04:41.633 ' 00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.633 --rc genhtml_branch_coverage=1 00:04:41.633 --rc genhtml_function_coverage=1 00:04:41.633 --rc genhtml_legend=1 00:04:41.633 --rc geninfo_all_blocks=1 00:04:41.633 --rc geninfo_unexecuted_blocks=1 00:04:41.633 00:04:41.633 ' 00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.633 --rc genhtml_branch_coverage=1 00:04:41.633 --rc genhtml_function_coverage=1 00:04:41.633 --rc genhtml_legend=1 00:04:41.633 --rc geninfo_all_blocks=1 00:04:41.633 --rc geninfo_unexecuted_blocks=1 00:04:41.633 00:04:41.633 ' 00:04:41.633 13:17:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:41.633 13:17:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57878 00:04:41.633 13:17:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:41.633 13:17:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.633 13:17:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57878 00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57878 ']' 00:04:41.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
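The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`) splits each version string into components and compares them numerically left to right, concluding here that lcov 1.15 is older than 2. A condensed sketch of that comparison, assuming purely numeric dot-separated components (the real script also handles `-` and `:` separators):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare traced above:
# split on '.', pad the shorter version with zeros, compare each field.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for ((v = 0; v < n; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly smaller in this field: less-than
        (( a > b )) && return 1   # strictly larger: not less-than
    done
    return 1                      # all fields equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note that a plain string compare would get `1.9` vs `1.10` wrong, which is why the fields are compared as integers.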
00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.633 13:17:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.891 [2024-11-26 13:17:30.221533] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:41.891 [2024-11-26 13:17:30.221719] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57878 ] 00:04:41.891 [2024-11-26 13:17:30.411434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.150 [2024-11-26 13:17:30.566130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.150 [2024-11-26 13:17:30.566330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.150 [2024-11-26 13:17:30.566420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.150 [2024-11-26 13:17:30.566424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.717 13:17:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.717 13:17:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:42.717 13:17:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:42.717 13:17:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.717 13:17:31 
event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.717 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:42.717 POWER: Cannot set governor of lcore 0 to userspace 00:04:42.717 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:42.717 POWER: Cannot set governor of lcore 0 to performance 00:04:42.717 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:42.717 POWER: Cannot set governor of lcore 0 to userspace 00:04:42.717 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:42.717 POWER: Cannot set governor of lcore 0 to userspace 00:04:42.717 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:42.717 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:42.717 POWER: Unable to set Power Management Environment for lcore 0 00:04:42.717 [2024-11-26 13:17:31.203649] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:42.717 [2024-11-26 13:17:31.203823] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:42.717 [2024-11-26 13:17:31.203931] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:42.717 [2024-11-26 13:17:31.204042] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:42.717 [2024-11-26 13:17:31.204153] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:42.718 [2024-11-26 13:17:31.204205] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:42.718 13:17:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.718 13:17:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:42.718 13:17:31 event.event_scheduler -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:42.718 13:17:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.977 [2024-11-26 13:17:31.475965] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:42.977 13:17:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.977 13:17:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:42.977 13:17:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.977 13:17:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.977 13:17:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.977 ************************************ 00:04:42.977 START TEST scheduler_create_thread 00:04:42.977 ************************************ 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.977 2 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 
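Each `scheduler_thread_create` call above pins a thread with a hex cpumask in which bit N selects core N: 0x1 is core 0, 0x2 is core 1, 0x4 is core 2, 0x8 is core 3, and 0xF (as in the `-m 0xF` launches above) is all four. A small sketch of that mask arithmetic, with hypothetical helper names:

```shell
#!/usr/bin/env bash
# Cpumask helpers matching the convention used above: bit N == core N.
core_mask()  { printf '0x%x\n' $(( 1 << $1 )); }    # mask for one core
range_mask() { printf '0x%x\n' $(( (1 << $1) - 1 )); }  # cores 0..N-1

core_mask 0    # -> 0x1
core_mask 3    # -> 0x8
range_mask 4   # -> 0xf, i.e. the four-core mask the tests pass as -m 0xF
```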
00:04:42.977 3 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.977 4 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.977 5 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.977 6 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 
00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.977 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.236 7 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.236 8 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.236 9 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.236 10 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:43.236 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:43.237 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.237 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.237 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.237 13:17:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:43.237 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.237 13:17:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.173 13:17:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.173 13:17:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:44.173 13:17:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_delete 12 00:04:44.173 13:17:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.173 13:17:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.109 ************************************ 00:04:45.109 END TEST scheduler_create_thread 00:04:45.109 ************************************ 00:04:45.109 13:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.109 00:04:45.109 real 0m2.137s 00:04:45.109 user 0m0.016s 00:04:45.109 sys 0m0.008s 00:04:45.109 13:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.109 13:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.369 13:17:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:45.369 13:17:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57878 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57878 ']' 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57878 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57878 00:04:45.369 killing process with pid 57878 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57878' 00:04:45.369 13:17:33 event.event_scheduler -- 
common/autotest_common.sh@973 -- # kill 57878 00:04:45.369 13:17:33 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57878 00:04:45.628 [2024-11-26 13:17:34.107199] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:46.566 ************************************ 00:04:46.566 END TEST event_scheduler 00:04:46.566 ************************************ 00:04:46.566 00:04:46.566 real 0m5.049s 00:04:46.566 user 0m8.630s 00:04:46.566 sys 0m0.522s 00:04:46.566 13:17:34 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.566 13:17:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.566 13:17:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.566 13:17:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.566 13:17:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.566 13:17:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.566 13:17:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.566 ************************************ 00:04:46.566 START TEST app_repeat 00:04:46.566 ************************************ 00:04:46.566 13:17:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:46.566 Process app_repeat pid: 57978 00:04:46.566 spdk_app_start 
Round 0 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57978 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57978' 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:46.566 13:17:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57978 /var/tmp/spdk-nbd.sock 00:04:46.566 13:17:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57978 ']' 00:04:46.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.566 13:17:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.566 13:17:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.566 13:17:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.566 13:17:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.566 13:17:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.566 [2024-11-26 13:17:35.088788] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:46.566 [2024-11-26 13:17:35.088924] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57978 ] 00:04:46.824 [2024-11-26 13:17:35.253791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.824 [2024-11-26 13:17:35.366332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.824 [2024-11-26 13:17:35.366338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.760 13:17:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.760 13:17:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:47.760 13:17:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.760 Malloc0 00:04:47.760 13:17:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.328 Malloc1 00:04:48.328 13:17:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.328 13:17:36 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.328 13:17:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.329 13:17:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.329 13:17:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.329 13:17:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.329 13:17:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.329 13:17:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.587 /dev/nbd0 00:04:48.587 13:17:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.587 13:17:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.587 1+0 records in 00:04:48.587 1+0 
records out 00:04:48.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019879 s, 20.6 MB/s 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.587 13:17:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.587 13:17:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.587 13:17:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.587 13:17:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.587 /dev/nbd1 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.846 1+0 records in 00:04:48.846 1+0 records out 00:04:48.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281996 s, 14.5 MB/s 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.846 13:17:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.846 { 00:04:48.846 "nbd_device": "/dev/nbd0", 00:04:48.846 "bdev_name": "Malloc0" 00:04:48.846 }, 00:04:48.846 { 00:04:48.846 "nbd_device": "/dev/nbd1", 00:04:48.846 "bdev_name": "Malloc1" 00:04:48.846 } 00:04:48.846 ]' 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.846 { 00:04:48.846 "nbd_device": "/dev/nbd0", 00:04:48.846 "bdev_name": "Malloc0" 00:04:48.846 }, 00:04:48.846 { 00:04:48.846 "nbd_device": "/dev/nbd1", 00:04:48.846 "bdev_name": "Malloc1" 00:04:48.846 } 00:04:48.846 ]' 00:04:48.846 13:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.104 /dev/nbd1' 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.104 /dev/nbd1' 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.104 256+0 records in 00:04:49.104 256+0 records out 00:04:49.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763577 s, 137 MB/s 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.104 256+0 records in 00:04:49.104 256+0 records out 00:04:49.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263732 s, 39.8 MB/s 00:04:49.104 13:17:37 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.104 13:17:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.105 256+0 records in 00:04:49.105 256+0 records out 00:04:49.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290344 s, 36.1 MB/s 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.105 13:17:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.363 13:17:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.621 13:17:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.880 13:17:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.880 13:17:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.448 13:17:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.386 [2024-11-26 13:17:39.701524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.386 [2024-11-26 13:17:39.801527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.386 [2024-11-26 13:17:39.801551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.645 
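The trace above repeatedly calls the `waitfornbd` helper, which polls `/proc/partitions` up to 20 times until the nbd device name appears. A minimal standalone sketch of that polling pattern follows; the `partitions` parameter and the `_sketch` suffix are assumptions added here so it can run without a real nbd device, and the 0.1 s retry delay is a guess at the helper's actual pacing:

```shell
# Sketch of the waitfornbd polling pattern seen in the trace: retry a
# word-match against a partitions table (normally /proc/partitions) up
# to 20 times before giving up.
waitfornbd_sketch() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}  # parameter added for testability
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches "nbd0" as a whole word, so "nbd0" does not match "nbd01"
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}
```

The real helper in the trace goes on to `dd` one 4 KiB block off the device with `iflag=direct` and `stat` the result as a read sanity check; that part is omitted here since it needs an actual block device.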
[2024-11-26 13:17:39.978012] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.645 [2024-11-26 13:17:39.978104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.553 13:17:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.553 spdk_app_start Round 1 00:04:53.553 13:17:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:53.553 13:17:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57978 /var/tmp/spdk-nbd.sock 00:04:53.553 13:17:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57978 ']' 00:04:53.553 13:17:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.553 13:17:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.553 13:17:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:53.553 13:17:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.553 13:17:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.553 13:17:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.553 13:17:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.553 13:17:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.813 Malloc0 00:04:53.813 13:17:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.381 Malloc1 00:04:54.381 13:17:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.381 13:17:42 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.381 /dev/nbd0 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.381 13:17:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.381 1+0 records in 00:04:54.381 1+0 records out 00:04:54.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303314 s, 13.5 MB/s 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.381 13:17:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.382 
13:17:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.382 13:17:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.382 13:17:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.382 13:17:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.382 13:17:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.641 /dev/nbd1 00:04:54.641 13:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.641 13:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.641 1+0 records in 00:04:54.641 1+0 records out 00:04:54.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238707 s, 17.2 MB/s 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.641 13:17:43 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.641 13:17:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.641 13:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.641 13:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.641 13:17:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.641 13:17:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.641 13:17:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.901 { 00:04:54.901 "nbd_device": "/dev/nbd0", 00:04:54.901 "bdev_name": "Malloc0" 00:04:54.901 }, 00:04:54.901 { 00:04:54.901 "nbd_device": "/dev/nbd1", 00:04:54.901 "bdev_name": "Malloc1" 00:04:54.901 } 00:04:54.901 ]' 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.901 { 00:04:54.901 "nbd_device": "/dev/nbd0", 00:04:54.901 "bdev_name": "Malloc0" 00:04:54.901 }, 00:04:54.901 { 00:04:54.901 "nbd_device": "/dev/nbd1", 00:04:54.901 "bdev_name": "Malloc1" 00:04:54.901 } 00:04:54.901 ]' 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.901 /dev/nbd1' 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.901 /dev/nbd1' 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.901 
13:17:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.901 13:17:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.161 256+0 records in 00:04:55.161 256+0 records out 00:04:55.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00987299 s, 106 MB/s 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.161 256+0 records in 00:04:55.161 256+0 records out 00:04:55.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02518 s, 41.6 MB/s 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.161 256+0 records in 00:04:55.161 256+0 records out 00:04:55.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288708 s, 36.3 MB/s 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.161 13:17:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.420 13:17:43 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.420 13:17:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.679 13:17:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.939 13:17:44 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.939 13:17:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.939 13:17:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.508 13:17:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.446 [2024-11-26 13:17:45.776924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.446 [2024-11-26 13:17:45.876445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.446 [2024-11-26 13:17:45.876450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.705 [2024-11-26 13:17:46.053016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.705 [2024-11-26 13:17:46.053127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
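Both teardown passes above end with `nbd_get_count` returning 0: the disk list JSON is piped through `jq -r '.[] | .nbd_device'` and the device paths are counted with `grep -c /dev/nbd`. The trace's bare `true` after the empty-list `grep` hints at the edge case this idiom has to handle. A small sketch, with a hypothetical `count_nbd` wrapper name not taken from the repo:

```shell
# Count /dev/nbd entries in a newline-separated device list, as the
# nbd_get_count helper in the trace does after the jq extraction.
count_nbd() {
    # grep -c prints 0 but exits 1 when nothing matches; '|| true' keeps
    # a `set -e` script (like the traced autotest) from aborting on an
    # empty list, which is why the trace shows a lone `true` for count=0.
    echo "$1" | grep -c /dev/nbd || true
}

count_nbd '/dev/nbd0
/dev/nbd1'      # prints 2
count_nbd ''    # prints 0 (and grep exits 1, swallowed by || true)
```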
00:04:59.609 13:17:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.609 spdk_app_start Round 2 00:04:59.609 13:17:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:59.609 13:17:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57978 /var/tmp/spdk-nbd.sock 00:04:59.609 13:17:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57978 ']' 00:04:59.609 13:17:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.609 13:17:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.609 13:17:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.609 13:17:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.609 13:17:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.609 13:17:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.609 13:17:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.609 13:17:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.177 Malloc0 00:05:00.177 13:17:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.440 Malloc1 00:05:00.440 13:17:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.440 
13:17:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.440 13:17:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.440 /dev/nbd0 00:05:00.698 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.698 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:00.698 13:17:49 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.698 1+0 records in 00:05:00.698 1+0 records out 00:05:00.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210315 s, 19.5 MB/s 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.698 13:17:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.698 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.698 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.698 13:17:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.957 /dev/nbd1 00:05:00.957 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.957 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.957 13:17:49 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.957 1+0 records in 00:05:00.957 1+0 records out 00:05:00.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323287 s, 12.7 MB/s 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.957 13:17:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.957 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.957 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.957 13:17:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.957 13:17:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.957 13:17:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.216 { 00:05:01.216 "nbd_device": "/dev/nbd0", 00:05:01.216 "bdev_name": "Malloc0" 00:05:01.216 }, 00:05:01.216 { 00:05:01.216 "nbd_device": "/dev/nbd1", 00:05:01.216 "bdev_name": 
"Malloc1" 00:05:01.216 } 00:05:01.216 ]' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.216 { 00:05:01.216 "nbd_device": "/dev/nbd0", 00:05:01.216 "bdev_name": "Malloc0" 00:05:01.216 }, 00:05:01.216 { 00:05:01.216 "nbd_device": "/dev/nbd1", 00:05:01.216 "bdev_name": "Malloc1" 00:05:01.216 } 00:05:01.216 ]' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.216 /dev/nbd1' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.216 /dev/nbd1' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.216 256+0 records in 00:05:01.216 256+0 records out 00:05:01.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789772 s, 133 MB/s 
00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.216 256+0 records in 00:05:01.216 256+0 records out 00:05:01.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233693 s, 44.9 MB/s 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.216 256+0 records in 00:05:01.216 256+0 records out 00:05:01.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282922 s, 37.1 MB/s 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.216 13:17:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.475 13:17:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.060 13:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.375 13:17:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.375 13:17:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.639 13:17:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.576 [2024-11-26 13:17:51.945158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.576 [2024-11-26 13:17:52.042940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.576 [2024-11-26 13:17:52.042956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.835 [2024-11-26 13:17:52.218543] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.835 [2024-11-26 13:17:52.218632] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.741 13:17:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57978 /var/tmp/spdk-nbd.sock 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57978 ']' 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.741 13:17:54 event.app_repeat -- event/event.sh@39 -- # killprocess 57978 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57978 ']' 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57978 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.741 13:17:54 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57978 00:05:06.000 13:17:54 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.000 13:17:54 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.000 killing process with pid 57978 00:05:06.000 13:17:54 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57978' 00:05:06.000 13:17:54 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57978 00:05:06.000 13:17:54 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57978 00:05:06.568 spdk_app_start is called in Round 0. 00:05:06.568 Shutdown signal received, stop current app iteration 00:05:06.568 Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 reinitialization... 00:05:06.568 spdk_app_start is called in Round 1. 00:05:06.568 Shutdown signal received, stop current app iteration 00:05:06.568 Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 reinitialization... 00:05:06.568 spdk_app_start is called in Round 2. 
00:05:06.568 Shutdown signal received, stop current app iteration 00:05:06.568 Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 reinitialization... 00:05:06.568 spdk_app_start is called in Round 3. 00:05:06.568 Shutdown signal received, stop current app iteration 00:05:06.827 13:17:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:06.827 13:17:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:06.827 00:05:06.827 real 0m20.112s 00:05:06.827 user 0m44.256s 00:05:06.827 sys 0m2.684s 00:05:06.827 13:17:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.827 13:17:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.827 ************************************ 00:05:06.827 END TEST app_repeat 00:05:06.828 ************************************ 00:05:06.828 13:17:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:06.828 13:17:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:06.828 13:17:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.828 13:17:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.828 13:17:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.828 ************************************ 00:05:06.828 START TEST cpu_locks 00:05:06.828 ************************************ 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:06.828 * Looking for test storage... 
00:05:06.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.828 13:17:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.828 --rc genhtml_branch_coverage=1 00:05:06.828 --rc genhtml_function_coverage=1 00:05:06.828 --rc genhtml_legend=1 00:05:06.828 --rc geninfo_all_blocks=1 00:05:06.828 --rc geninfo_unexecuted_blocks=1 00:05:06.828 00:05:06.828 ' 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.828 --rc genhtml_branch_coverage=1 00:05:06.828 --rc genhtml_function_coverage=1 00:05:06.828 --rc genhtml_legend=1 00:05:06.828 --rc geninfo_all_blocks=1 00:05:06.828 --rc geninfo_unexecuted_blocks=1 
00:05:06.828 00:05:06.828 ' 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.828 --rc genhtml_branch_coverage=1 00:05:06.828 --rc genhtml_function_coverage=1 00:05:06.828 --rc genhtml_legend=1 00:05:06.828 --rc geninfo_all_blocks=1 00:05:06.828 --rc geninfo_unexecuted_blocks=1 00:05:06.828 00:05:06.828 ' 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.828 --rc genhtml_branch_coverage=1 00:05:06.828 --rc genhtml_function_coverage=1 00:05:06.828 --rc genhtml_legend=1 00:05:06.828 --rc geninfo_all_blocks=1 00:05:06.828 --rc geninfo_unexecuted_blocks=1 00:05:06.828 00:05:06.828 ' 00:05:06.828 13:17:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:06.828 13:17:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:06.828 13:17:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:06.828 13:17:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.828 13:17:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.828 ************************************ 00:05:06.828 START TEST default_locks 00:05:06.828 ************************************ 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58436 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58436 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58436 ']' 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.828 13:17:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.087 [2024-11-26 13:17:55.509403] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:07.087 [2024-11-26 13:17:55.509614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58436 ] 00:05:07.345 [2024-11-26 13:17:55.686486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.345 [2024-11-26 13:17:55.797694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.282 13:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.282 13:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:08.282 13:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58436 00:05:08.282 13:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58436 00:05:08.282 13:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.541 13:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58436 00:05:08.541 13:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58436 ']' 00:05:08.542 13:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58436 00:05:08.542 13:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:08.542 13:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.542 13:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58436 00:05:08.542 13:17:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.542 13:17:57 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.542 killing process with pid 58436 00:05:08.542 13:17:57 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58436' 00:05:08.542 13:17:57 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58436 00:05:08.542 13:17:57 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58436 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58436 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58436 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58436 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58436 ']' 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58436) - No such process 00:05:10.449 ERROR: process (pid: 58436) is no longer running 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.449 00:05:10.449 real 0m3.560s 00:05:10.449 user 0m3.463s 00:05:10.449 sys 0m0.761s 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.449 13:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 ************************************ 00:05:10.449 END TEST default_locks 00:05:10.449 ************************************ 00:05:10.449 13:17:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:10.449 13:17:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:10.449 13:17:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.449 13:17:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 ************************************ 00:05:10.449 START TEST default_locks_via_rpc 00:05:10.449 ************************************ 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58506 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58506 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58506 ']' 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.449 13:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.708 [2024-11-26 13:17:59.126671] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:10.708 [2024-11-26 13:17:59.126849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58506 ] 00:05:10.966 [2024-11-26 13:17:59.307600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.966 [2024-11-26 13:17:59.418881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.903 13:18:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58506 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58506 00:05:11.903 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58506 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58506 ']' 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58506 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58506 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.162 killing process with pid 58506 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58506' 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58506 00:05:12.162 13:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58506 00:05:14.068 00:05:14.068 real 0m3.587s 00:05:14.068 user 0m3.502s 00:05:14.068 sys 0m0.799s 00:05:14.068 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.068 13:18:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.068 ************************************ 00:05:14.068 END TEST default_locks_via_rpc 00:05:14.068 ************************************ 00:05:14.068 13:18:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.068 13:18:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.068 13:18:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.068 13:18:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.327 ************************************ 00:05:14.327 START TEST non_locking_app_on_locked_coremask 00:05:14.327 ************************************ 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58574 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58574 /var/tmp/spdk.sock 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58574 ']' 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.327 13:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.327 [2024-11-26 13:18:02.737213] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:14.327 [2024-11-26 13:18:02.737377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58574 ] 00:05:14.586 [2024-11-26 13:18:02.900153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.586 [2024-11-26 13:18:03.011569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58590 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58590 /var/tmp/spdk2.sock 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58590 ']' 00:05:15.523 13:18:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.523 13:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.523 [2024-11-26 13:18:03.915119] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:15.523 [2024-11-26 13:18:03.915320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58590 ] 00:05:15.783 [2024-11-26 13:18:04.109971] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.783 [2024-11-26 13:18:04.110028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.783 [2024-11-26 13:18:04.334801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.321 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.321 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.321 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58574 00:05:18.321 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58574 00:05:18.321 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.579 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58574 00:05:18.579 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58574 ']' 00:05:18.579 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58574 00:05:18.579 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:18.579 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.840 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58574 00:05:18.840 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.840 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.840 killing process with pid 58574 00:05:18.840 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58574' 00:05:18.840 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58574 00:05:18.840 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58574 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58590 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58590 ']' 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58590 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58590 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.035 killing process with pid 58590 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58590' 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58590 00:05:23.035 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58590 00:05:24.413 00:05:24.413 real 0m10.319s 00:05:24.413 user 0m10.691s 00:05:24.413 sys 0m1.427s 00:05:24.413 13:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:24.413 13:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 ************************************ 00:05:24.413 END TEST non_locking_app_on_locked_coremask 00:05:24.413 ************************************ 00:05:24.672 13:18:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:24.672 13:18:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.672 13:18:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.672 13:18:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.672 ************************************ 00:05:24.672 START TEST locking_app_on_unlocked_coremask 00:05:24.672 ************************************ 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58727 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58727 /var/tmp/spdk.sock 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58727 ']' 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.672 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.672 [2024-11-26 13:18:13.140604] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:24.672 [2024-11-26 13:18:13.140791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58727 ] 00:05:24.931 [2024-11-26 13:18:13.329275] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:24.931 [2024-11-26 13:18:13.329344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.931 [2024-11-26 13:18:13.479552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58749 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58749 /var/tmp/spdk2.sock 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58749 ']' 
00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.867 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.867 [2024-11-26 13:18:14.385595] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:25.867 [2024-11-26 13:18:14.385801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58749 ] 00:05:26.126 [2024-11-26 13:18:14.578615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.386 [2024-11-26 13:18:14.803740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.922 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.922 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.922 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58749 00:05:28.922 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58749 00:05:28.922 13:18:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58727 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58727 ']' 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58727 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58727 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.491 killing process with pid 58727 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58727' 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58727 00:05:29.491 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58727 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58749 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58749 ']' 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58749 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58749 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.728 killing process with pid 58749 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58749' 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58749 00:05:33.728 13:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58749 00:05:35.632 00:05:35.632 real 0m10.705s 00:05:35.632 user 0m11.032s 00:05:35.632 sys 0m1.585s 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.632 ************************************ 00:05:35.632 END TEST locking_app_on_unlocked_coremask 00:05:35.632 ************************************ 00:05:35.632 13:18:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:35.632 13:18:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.632 13:18:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.632 13:18:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.632 ************************************ 00:05:35.632 START TEST 
locking_app_on_locked_coremask 00:05:35.632 ************************************ 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58886 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58886 /var/tmp/spdk.sock 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58886 ']' 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.632 13:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.632 [2024-11-26 13:18:23.901412] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:35.632 [2024-11-26 13:18:23.901640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:05:35.632 [2024-11-26 13:18:24.074538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.632 [2024-11-26 13:18:24.183671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58902 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58902 /var/tmp/spdk2.sock 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58902 /var/tmp/spdk2.sock 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58902 /var/tmp/spdk2.sock 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58902 ']' 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.571 13:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.571 [2024-11-26 13:18:25.093432] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:36.571 [2024-11-26 13:18:25.093647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58902 ] 00:05:36.830 [2024-11-26 13:18:25.282752] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58886 has claimed it. 00:05:36.830 [2024-11-26 13:18:25.282830] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:37.399 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58902) - No such process 00:05:37.399 ERROR: process (pid: 58902) is no longer running 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58886 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58886 00:05:37.399 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.659 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58886 00:05:37.659 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58886 ']' 00:05:37.659 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58886 00:05:37.659 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.659 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.659 13:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58886 00:05:37.659 
13:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.659 13:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.659 killing process with pid 58886 00:05:37.659 13:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58886' 00:05:37.659 13:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58886 00:05:37.659 13:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58886 00:05:39.566 00:05:39.566 real 0m4.185s 00:05:39.566 user 0m4.390s 00:05:39.566 sys 0m0.865s 00:05:39.566 13:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.566 13:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.566 ************************************ 00:05:39.566 END TEST locking_app_on_locked_coremask 00:05:39.566 ************************************ 00:05:39.566 13:18:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:39.566 13:18:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.566 13:18:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.566 13:18:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.566 ************************************ 00:05:39.566 START TEST locking_overlapped_coremask 00:05:39.566 ************************************ 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58966 00:05:39.566 13:18:28 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58966 /var/tmp/spdk.sock 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58966 ']' 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.566 13:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.825 [2024-11-26 13:18:28.139023] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:39.825 [2024-11-26 13:18:28.139214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:05:39.825 [2024-11-26 13:18:28.321798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.084 [2024-11-26 13:18:28.440860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.084 [2024-11-26 13:18:28.440998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.084 [2024-11-26 13:18:28.441012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58986 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58986 /var/tmp/spdk2.sock 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58986 /var/tmp/spdk2.sock 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58986 /var/tmp/spdk2.sock 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58986 ']' 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.021 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.021 [2024-11-26 13:18:29.354893] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:41.021 [2024-11-26 13:18:29.355081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:05:41.021 [2024-11-26 13:18:29.543877] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58966 has claimed it. 00:05:41.021 [2024-11-26 13:18:29.544122] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
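The core-claim failure just above ("Cannot create lock on core 2, probably process 58966 has claimed it") is SPDK refusing to start a second target on an already-claimed core. As a hedged sketch only (not SPDK's actual implementation — the real per-core files are `/var/tmp/spdk_cpu_lock_NNN`), the same exclusive-claim behavior can be modeled with `flock(1)` on a stand-in lock file:

```shell
# Sketch: model a per-core claim as an exclusive advisory file lock.
# Uses a temp file instead of the real /var/tmp/spdk_cpu_lock_002.
lockfile=$(mktemp)

exec 9>"$lockfile"
flock -n 9 && first=claimed || first=busy       # first process claims the core

# A second open file description cannot take the same exclusive lock.
( exec 8>"$lockfile"; flock -n 8 ) && second=claimed || second=busy

echo "$first $second"
```

With the lock held on fd 9, the second non-blocking `flock -n` fails immediately, mirroring how the second `spdk_tgt` exits instead of waiting.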
00:05:41.589 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58986) - No such process 00:05:41.589 ERROR: process (pid: 58986) is no longer running 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58966 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58966 00:05:41.589 13:18:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58966 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.589 killing process with pid 58966 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58966' 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58966 00:05:41.589 13:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58966 00:05:43.497 00:05:43.497 real 0m3.920s 00:05:43.497 user 0m10.466s 00:05:43.497 sys 0m0.713s 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.497 ************************************ 00:05:43.497 END TEST locking_overlapped_coremask 00:05:43.497 ************************************ 00:05:43.497 13:18:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:43.497 13:18:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.497 13:18:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.497 13:18:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.497 ************************************ 00:05:43.497 START TEST 
locking_overlapped_coremask_via_rpc 00:05:43.497 ************************************ 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59051 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59051 /var/tmp/spdk.sock 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59051 ']' 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.497 13:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.756 [2024-11-26 13:18:32.120641] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:43.756 [2024-11-26 13:18:32.120844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:05:43.756 [2024-11-26 13:18:32.299927] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:43.756 [2024-11-26 13:18:32.300003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.015 [2024-11-26 13:18:32.418641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.015 [2024-11-26 13:18:32.418783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.015 [2024-11-26 13:18:32.418789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59069 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59069 /var/tmp/spdk2.sock 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59069 ']' 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.953 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.953 13:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.953 [2024-11-26 13:18:33.341575] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:44.953 [2024-11-26 13:18:33.341783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ] 00:05:45.212 [2024-11-26 13:18:33.522472] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
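The conflict reported a few entries below is simple mask arithmetic: the first target holds `-m 0x7` (cores 0-2) while the second runs with `-m 0x1c` (cores 2-4), so the two claims overlap exactly on core 2 — the core named in the error. A quick check of the overlap:

```shell
# Mask 0x7 covers cores 0,1,2; mask 0x1c covers cores 2,3,4.
m1=$((0x7)); m2=$((0x1c))
overlap=$(( m1 & m2 ))
printf 'overlapping coremask: 0x%x\n' "$overlap"   # bit 2 set -> core 2
```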
00:05:45.212 [2024-11-26 13:18:33.522520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.212 [2024-11-26 13:18:33.756698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.212 [2024-11-26 13:18:33.756842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.212 [2024-11-26 13:18:33.756857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.746 13:18:36 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.746 [2024-11-26 13:18:36.060524] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59051 has claimed it. 00:05:47.746 request: 00:05:47.746 { 00:05:47.746 "method": "framework_enable_cpumask_locks", 00:05:47.746 "req_id": 1 00:05:47.746 } 00:05:47.746 Got JSON-RPC error response 00:05:47.746 response: 00:05:47.746 { 00:05:47.746 "code": -32603, 00:05:47.746 "message": "Failed to claim CPU core: 2" 00:05:47.746 } 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59051 /var/tmp/spdk.sock 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59051 ']' 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59069 /var/tmp/spdk2.sock 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59069 ']' 00:05:47.746 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.747 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.747 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
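The `check_remaining_locks` helper traced in this log asserts that exactly the lock files for the claimed cores survive: it globs `/var/tmp/spdk_cpu_lock_*` and compares the result against a brace expansion of the expected names. A minimal self-contained sketch of that comparison, using a temp directory in place of `/var/tmp`:

```shell
# Sketch of check_remaining_locks: glob the lock files actually present
# and compare against the set that coremask 0x7 (cores 0-2) should leave.
tmpdir=$(mktemp -d)
touch "$tmpdir"/spdk_cpu_lock_000 "$tmpdir"/spdk_cpu_lock_001 "$tmpdir"/spdk_cpu_lock_002

locks=("$tmpdir"/spdk_cpu_lock_*)                    # what is on disk (sorted glob)
locks_expected=("$tmpdir"/spdk_cpu_lock_{000..002})  # what should be on disk

[[ "${locks[*]}" == "${locks_expected[*]}" ]] && verdict=match || verdict=mismatch
echo "$verdict"
```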
00:05:47.747 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.747 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.316 ************************************ 00:05:48.316 END TEST locking_overlapped_coremask_via_rpc 00:05:48.316 ************************************ 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.316 00:05:48.316 real 0m4.604s 00:05:48.316 user 0m1.562s 00:05:48.316 sys 0m0.225s 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.316 13:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.316 13:18:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.316 13:18:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59051 ]] 00:05:48.316 13:18:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59051 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59051 ']' 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59051 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59051 00:05:48.316 killing process with pid 59051 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59051' 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59051 00:05:48.316 13:18:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59051 00:05:50.220 13:18:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59069 ]] 00:05:50.220 13:18:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59069 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59069 ']' 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59069 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59069 00:05:50.220 killing process with pid 59069 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59069' 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59069 00:05:50.220 13:18:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59069 00:05:52.125 13:18:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.125 13:18:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:52.125 13:18:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59051 ]] 00:05:52.125 13:18:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59051 00:05:52.125 Process with pid 59051 is not found 00:05:52.125 Process with pid 59069 is not found 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59051 ']' 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59051 00:05:52.125 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59051) - No such process 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59051 is not found' 00:05:52.125 13:18:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59069 ]] 00:05:52.125 13:18:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59069 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59069 ']' 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59069 00:05:52.125 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59069) - No such process 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59069 is not found' 00:05:52.125 13:18:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.125 ************************************ 00:05:52.125 END TEST cpu_locks 00:05:52.125 ************************************ 00:05:52.125 00:05:52.125 real 0m45.357s 00:05:52.125 user 1m18.131s 00:05:52.125 sys 0m7.687s 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:52.125 13:18:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.125 ************************************ 00:05:52.125 END TEST event 00:05:52.125 ************************************ 00:05:52.125 00:05:52.125 real 1m15.692s 00:05:52.125 user 2m18.196s 00:05:52.125 sys 0m11.516s 00:05:52.125 13:18:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.125 13:18:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.125 13:18:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:52.125 13:18:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.125 13:18:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.125 13:18:40 -- common/autotest_common.sh@10 -- # set +x 00:05:52.125 ************************************ 00:05:52.125 START TEST thread 00:05:52.125 ************************************ 00:05:52.125 13:18:40 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:52.384 * Looking for test storage... 
00:05:52.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:52.384 13:18:40 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.384 13:18:40 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.384 13:18:40 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.384 13:18:40 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.384 13:18:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.384 13:18:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.384 13:18:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.384 13:18:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.384 13:18:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.384 13:18:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.384 13:18:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.384 13:18:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.384 13:18:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.384 13:18:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.384 13:18:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.384 13:18:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:52.384 13:18:40 thread -- scripts/common.sh@345 -- # : 1 00:05:52.384 13:18:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.384 13:18:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.384 13:18:40 thread -- scripts/common.sh@365 -- # decimal 1 00:05:52.384 13:18:40 thread -- scripts/common.sh@353 -- # local d=1 00:05:52.384 13:18:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.384 13:18:40 thread -- scripts/common.sh@355 -- # echo 1 00:05:52.384 13:18:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.384 13:18:40 thread -- scripts/common.sh@366 -- # decimal 2 00:05:52.384 13:18:40 thread -- scripts/common.sh@353 -- # local d=2 00:05:52.384 13:18:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.384 13:18:40 thread -- scripts/common.sh@355 -- # echo 2 00:05:52.384 13:18:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.384 13:18:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.384 13:18:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.384 13:18:40 thread -- scripts/common.sh@368 -- # return 0 00:05:52.384 13:18:40 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.384 13:18:40 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.384 --rc genhtml_branch_coverage=1 00:05:52.384 --rc genhtml_function_coverage=1 00:05:52.384 --rc genhtml_legend=1 00:05:52.384 --rc geninfo_all_blocks=1 00:05:52.384 --rc geninfo_unexecuted_blocks=1 00:05:52.384 00:05:52.384 ' 00:05:52.384 13:18:40 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.385 --rc genhtml_branch_coverage=1 00:05:52.385 --rc genhtml_function_coverage=1 00:05:52.385 --rc genhtml_legend=1 00:05:52.385 --rc geninfo_all_blocks=1 00:05:52.385 --rc geninfo_unexecuted_blocks=1 00:05:52.385 00:05:52.385 ' 00:05:52.385 13:18:40 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.385 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.385 --rc genhtml_branch_coverage=1 00:05:52.385 --rc genhtml_function_coverage=1 00:05:52.385 --rc genhtml_legend=1 00:05:52.385 --rc geninfo_all_blocks=1 00:05:52.385 --rc geninfo_unexecuted_blocks=1 00:05:52.385 00:05:52.385 ' 00:05:52.385 13:18:40 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.385 --rc genhtml_branch_coverage=1 00:05:52.385 --rc genhtml_function_coverage=1 00:05:52.385 --rc genhtml_legend=1 00:05:52.385 --rc geninfo_all_blocks=1 00:05:52.385 --rc geninfo_unexecuted_blocks=1 00:05:52.385 00:05:52.385 ' 00:05:52.385 13:18:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.385 13:18:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:52.385 13:18:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.385 13:18:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.385 ************************************ 00:05:52.385 START TEST thread_poller_perf 00:05:52.385 ************************************ 00:05:52.385 13:18:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.385 [2024-11-26 13:18:40.875114] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
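The poller_perf summaries that follow evidently compute `poller_cost` as busy TSC cycles divided by `total_run_count`, then convert cycles to nanoseconds via `tsc_hz`. Redoing the arithmetic with the figures the first run reports:

```shell
# Figures printed by the first poller_perf run in this log.
busy=2215024036        # busy TSC cycles over the measurement window
runs=372000            # total_run_count
tsc_hz=2200000000      # 2.2 GHz TSC

cost_cyc=$(( busy / runs ))                       # cycles per poller iteration
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # same cost in nanoseconds
echo "poller_cost: $cost_cyc (cyc), $cost_nsec (nsec)"
```

This reproduces the logged "poller_cost: 5954 (cyc), 2706 (nsec)"; the second run's 450 cyc / 204 nsec follows from its own busy/run counts the same way.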
00:05:52.385 [2024-11-26 13:18:40.875429] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59261 ] 00:05:52.644 [2024-11-26 13:18:41.049059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.903 [2024-11-26 13:18:41.209781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.903 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:53.841 [2024-11-26T13:18:42.411Z] ====================================== 00:05:53.841 [2024-11-26T13:18:42.411Z] busy:2215024036 (cyc) 00:05:53.841 [2024-11-26T13:18:42.411Z] total_run_count: 372000 00:05:53.841 [2024-11-26T13:18:42.411Z] tsc_hz: 2200000000 (cyc) 00:05:53.841 [2024-11-26T13:18:42.411Z] ====================================== 00:05:53.841 [2024-11-26T13:18:42.411Z] poller_cost: 5954 (cyc), 2706 (nsec) 00:05:54.100 00:05:54.100 real 0m1.577s 00:05:54.100 user 0m1.374s 00:05:54.100 sys 0m0.093s 00:05:54.100 13:18:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.100 13:18:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.100 ************************************ 00:05:54.100 END TEST thread_poller_perf 00:05:54.100 ************************************ 00:05:54.100 13:18:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.100 13:18:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:54.100 13:18:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.100 13:18:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.100 ************************************ 00:05:54.100 START TEST thread_poller_perf 00:05:54.100 
************************************ 00:05:54.100 13:18:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.100 [2024-11-26 13:18:42.522688] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:54.101 [2024-11-26 13:18:42.523029] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:05:54.360 [2024-11-26 13:18:42.701980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.360 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:54.360 [2024-11-26 13:18:42.823633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.739 [2024-11-26T13:18:44.309Z] ====================================== 00:05:55.739 [2024-11-26T13:18:44.309Z] busy:2203367384 (cyc) 00:05:55.739 [2024-11-26T13:18:44.309Z] total_run_count: 4891000 00:05:55.739 [2024-11-26T13:18:44.309Z] tsc_hz: 2200000000 (cyc) 00:05:55.739 [2024-11-26T13:18:44.309Z] ====================================== 00:05:55.739 [2024-11-26T13:18:44.309Z] poller_cost: 450 (cyc), 204 (nsec) 00:05:55.739 ************************************ 00:05:55.739 END TEST thread_poller_perf 00:05:55.739 ************************************ 00:05:55.739 00:05:55.739 real 0m1.542s 00:05:55.739 user 0m1.325s 00:05:55.739 sys 0m0.109s 00:05:55.739 13:18:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.739 13:18:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.739 13:18:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:55.739 ************************************ 00:05:55.739 END TEST thread 00:05:55.740 ************************************ 00:05:55.740 
00:05:55.740 real 0m3.426s 00:05:55.740 user 0m2.841s 00:05:55.740 sys 0m0.361s 00:05:55.740 13:18:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.740 13:18:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.740 13:18:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:55.740 13:18:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:55.740 13:18:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.740 13:18:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.740 13:18:44 -- common/autotest_common.sh@10 -- # set +x 00:05:55.740 ************************************ 00:05:55.740 START TEST app_cmdline 00:05:55.740 ************************************ 00:05:55.740 13:18:44 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:55.740 * Looking for test storage... 00:05:55.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:55.740 13:18:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.740 13:18:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.740 13:18:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.999 13:18:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.999 --rc genhtml_branch_coverage=1 00:05:55.999 --rc genhtml_function_coverage=1 00:05:55.999 --rc 
genhtml_legend=1 00:05:55.999 --rc geninfo_all_blocks=1 00:05:55.999 --rc geninfo_unexecuted_blocks=1 00:05:55.999 00:05:55.999 ' 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.999 --rc genhtml_branch_coverage=1 00:05:55.999 --rc genhtml_function_coverage=1 00:05:55.999 --rc genhtml_legend=1 00:05:55.999 --rc geninfo_all_blocks=1 00:05:55.999 --rc geninfo_unexecuted_blocks=1 00:05:55.999 00:05:55.999 ' 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.999 --rc genhtml_branch_coverage=1 00:05:55.999 --rc genhtml_function_coverage=1 00:05:55.999 --rc genhtml_legend=1 00:05:55.999 --rc geninfo_all_blocks=1 00:05:55.999 --rc geninfo_unexecuted_blocks=1 00:05:55.999 00:05:55.999 ' 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.999 --rc genhtml_branch_coverage=1 00:05:55.999 --rc genhtml_function_coverage=1 00:05:55.999 --rc genhtml_legend=1 00:05:55.999 --rc geninfo_all_blocks=1 00:05:55.999 --rc geninfo_unexecuted_blocks=1 00:05:55.999 00:05:55.999 ' 00:05:55.999 13:18:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:55.999 13:18:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59386 00:05:55.999 13:18:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59386 00:05:55.999 13:18:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:05:55.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.999 13:18:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.999 [2024-11-26 13:18:44.452811] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:55.999 [2024-11-26 13:18:44.453340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:05:56.259 [2024-11-26 13:18:44.633685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.259 [2024-11-26 13:18:44.744603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.197 13:18:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.197 13:18:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:57.197 13:18:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:57.456 { 00:05:57.456 "version": "SPDK v25.01-pre git sha1 a9e1e4309", 00:05:57.456 "fields": { 00:05:57.456 "major": 25, 00:05:57.456 "minor": 1, 00:05:57.456 "patch": 0, 00:05:57.456 "suffix": "-pre", 00:05:57.456 "commit": "a9e1e4309" 00:05:57.456 } 00:05:57.456 } 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.456 13:18:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.456 13:18:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.457 13:18:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.457 13:18:45 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:57.457 13:18:45 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.716 request: 00:05:57.716 { 00:05:57.716 "method": "env_dpdk_get_mem_stats", 00:05:57.716 "req_id": 1 00:05:57.716 } 00:05:57.716 Got JSON-RPC error response 00:05:57.716 response: 00:05:57.716 { 00:05:57.716 "code": -32601, 00:05:57.716 "message": "Method not found" 00:05:57.716 } 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.716 13:18:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59386 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59386 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:05:57.716 killing process with pid 59386 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59386' 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 59386 00:05:57.716 13:18:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 59386 00:05:59.623 ************************************ 00:05:59.623 END TEST app_cmdline 00:05:59.623 ************************************ 
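The env_dpdk_get_mem_stats failure above is expected: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is rejected with the standard JSON-RPC "Method not found" code -32601. A minimal sketch of that allowlist behaviour (the function and dict names are illustrative, not SPDK's actual dispatcher):

```python
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(method, req_id):
    # Methods outside the allowlist get the standard JSON-RPC 2.0
    # -32601 "Method not found" error, as seen in the log above.
    if method not in ALLOWED:
        return {"id": req_id,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"id": req_id, "result": {}}

print(dispatch("env_dpdk_get_mem_stats", 1))  # error code -32601
```

-32601 is the code reserved by the JSON-RPC 2.0 specification for unknown methods, which is why the test treats it as a pass.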
00:05:59.623 00:05:59.623 real 0m3.893s 00:05:59.623 user 0m4.139s 00:05:59.623 sys 0m0.706s 00:05:59.623 13:18:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.623 13:18:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:59.623 13:18:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:59.623 13:18:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.623 13:18:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.623 13:18:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.623 ************************************ 00:05:59.623 START TEST version 00:05:59.623 ************************************ 00:05:59.623 13:18:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:59.623 * Looking for test storage... 00:05:59.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:59.623 13:18:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.623 13:18:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.623 13:18:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.883 13:18:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.883 13:18:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.883 13:18:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.883 13:18:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.883 13:18:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.883 13:18:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.883 13:18:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.883 13:18:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.883 13:18:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.883 13:18:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.883 13:18:48 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:05:59.883 13:18:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.883 13:18:48 version -- scripts/common.sh@344 -- # case "$op" in 00:05:59.883 13:18:48 version -- scripts/common.sh@345 -- # : 1 00:05:59.883 13:18:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.883 13:18:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.883 13:18:48 version -- scripts/common.sh@365 -- # decimal 1 00:05:59.883 13:18:48 version -- scripts/common.sh@353 -- # local d=1 00:05:59.883 13:18:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.883 13:18:48 version -- scripts/common.sh@355 -- # echo 1 00:05:59.883 13:18:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.883 13:18:48 version -- scripts/common.sh@366 -- # decimal 2 00:05:59.883 13:18:48 version -- scripts/common.sh@353 -- # local d=2 00:05:59.883 13:18:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.883 13:18:48 version -- scripts/common.sh@355 -- # echo 2 00:05:59.883 13:18:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.883 13:18:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.883 13:18:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.883 13:18:48 version -- scripts/common.sh@368 -- # return 0 00:05:59.883 13:18:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.883 13:18:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.883 --rc genhtml_branch_coverage=1 00:05:59.883 --rc genhtml_function_coverage=1 00:05:59.883 --rc genhtml_legend=1 00:05:59.883 --rc geninfo_all_blocks=1 00:05:59.883 --rc geninfo_unexecuted_blocks=1 00:05:59.883 00:05:59.883 ' 00:05:59.883 13:18:48 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:05:59.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.883 --rc genhtml_branch_coverage=1 00:05:59.883 --rc genhtml_function_coverage=1 00:05:59.883 --rc genhtml_legend=1 00:05:59.883 --rc geninfo_all_blocks=1 00:05:59.883 --rc geninfo_unexecuted_blocks=1 00:05:59.883 00:05:59.883 ' 00:05:59.883 13:18:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.883 --rc genhtml_branch_coverage=1 00:05:59.883 --rc genhtml_function_coverage=1 00:05:59.883 --rc genhtml_legend=1 00:05:59.883 --rc geninfo_all_blocks=1 00:05:59.883 --rc geninfo_unexecuted_blocks=1 00:05:59.883 00:05:59.883 ' 00:05:59.883 13:18:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.883 --rc genhtml_branch_coverage=1 00:05:59.883 --rc genhtml_function_coverage=1 00:05:59.883 --rc genhtml_legend=1 00:05:59.883 --rc geninfo_all_blocks=1 00:05:59.883 --rc geninfo_unexecuted_blocks=1 00:05:59.883 00:05:59.883 ' 00:05:59.883 13:18:48 version -- app/version.sh@17 -- # get_header_version major 00:05:59.883 13:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # cut -f2 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.883 13:18:48 version -- app/version.sh@17 -- # major=25 00:05:59.883 13:18:48 version -- app/version.sh@18 -- # get_header_version minor 00:05:59.883 13:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # cut -f2 00:05:59.883 13:18:48 version -- app/version.sh@18 -- # minor=1 00:05:59.883 13:18:48 
version -- app/version.sh@19 -- # get_header_version patch 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # cut -f2 00:05:59.883 13:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.883 13:18:48 version -- app/version.sh@19 -- # patch=0 00:05:59.883 13:18:48 version -- app/version.sh@20 -- # get_header_version suffix 00:05:59.883 13:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.883 13:18:48 version -- app/version.sh@14 -- # cut -f2 00:05:59.883 13:18:48 version -- app/version.sh@20 -- # suffix=-pre 00:05:59.883 13:18:48 version -- app/version.sh@22 -- # version=25.1 00:05:59.883 13:18:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:59.883 13:18:48 version -- app/version.sh@28 -- # version=25.1rc0 00:05:59.883 13:18:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:59.883 13:18:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:59.883 13:18:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:59.883 13:18:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:59.883 00:05:59.883 real 0m0.273s 00:05:59.883 user 0m0.167s 00:05:59.883 sys 0m0.140s 00:05:59.883 13:18:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.883 13:18:48 version -- common/autotest_common.sh@10 -- # set +x 00:05:59.883 ************************************ 00:05:59.883 END TEST version 00:05:59.883 ************************************ 00:05:59.883 
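The version.sh run above assembles "25.1rc0" from the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines: the patch component is appended only when non-zero, and a "-pre" suffix is rendered as an "rc0" tag so that it compares against the Python package's reported version. A hedged reconstruction of that scheme, inferred from the logged steps (function name is my own):

```python
def spdk_version_string(major, minor, patch, suffix):
    # Mirror the steps logged by version.sh: base "major.minor",
    # ".patch" only if patch != 0, and "-pre" shown as an rc0 tag.
    v = f"{major}.{minor}"
    if patch != 0:
        v += f".{patch}"
    if suffix == "-pre":
        v += "rc0"
    return v

# Values greped from version.h in the log: major=25, minor=1, patch=0, suffix=-pre
print(spdk_version_string(25, 1, 0, "-pre"))  # -> 25.1rc0
```

This matches both the "version=25.1rc0" step and the py_version=25.1rc0 comparison that lets the test pass.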
13:18:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:59.883 13:18:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:05:59.883 13:18:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:59.883 13:18:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.883 13:18:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.883 13:18:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.883 ************************************ 00:05:59.883 START TEST bdev_raid 00:05:59.883 ************************************ 00:05:59.883 13:18:48 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:00.143 * Looking for test storage... 00:06:00.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:00.143 13:18:48 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.143 13:18:48 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.143 13:18:48 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.143 13:18:48 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.143 13:18:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:00.143 13:18:48 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.143 13:18:48 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.143 --rc genhtml_branch_coverage=1 00:06:00.143 --rc genhtml_function_coverage=1 00:06:00.143 --rc genhtml_legend=1 00:06:00.143 --rc geninfo_all_blocks=1 00:06:00.143 --rc geninfo_unexecuted_blocks=1 00:06:00.143 00:06:00.143 ' 00:06:00.143 13:18:48 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.143 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:00.143 --rc genhtml_branch_coverage=1 00:06:00.143 --rc genhtml_function_coverage=1 00:06:00.143 --rc genhtml_legend=1 00:06:00.143 --rc geninfo_all_blocks=1 00:06:00.143 --rc geninfo_unexecuted_blocks=1 00:06:00.143 00:06:00.143 ' 00:06:00.144 13:18:48 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.144 --rc genhtml_branch_coverage=1 00:06:00.144 --rc genhtml_function_coverage=1 00:06:00.144 --rc genhtml_legend=1 00:06:00.144 --rc geninfo_all_blocks=1 00:06:00.144 --rc geninfo_unexecuted_blocks=1 00:06:00.144 00:06:00.144 ' 00:06:00.144 13:18:48 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.144 --rc genhtml_branch_coverage=1 00:06:00.144 --rc genhtml_function_coverage=1 00:06:00.144 --rc genhtml_legend=1 00:06:00.144 --rc geninfo_all_blocks=1 00:06:00.144 --rc geninfo_unexecuted_blocks=1 00:06:00.144 00:06:00.144 ' 00:06:00.144 13:18:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:00.144 13:18:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.144 13:18:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:00.144 13:18:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:00.144 13:18:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:00.144 13:18:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:00.144 13:18:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:00.144 13:18:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.144 13:18:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.144 13:18:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:00.144 ************************************ 
00:06:00.144 START TEST raid1_resize_data_offset_test 00:06:00.144 ************************************ 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59568 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59568' 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:00.144 Process raid pid: 59568 00:06:00.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59568 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59568 ']' 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.144 13:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.403 [2024-11-26 13:18:48.711676] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:00.403 [2024-11-26 13:18:48.712137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.403 [2024-11-26 13:18:48.894768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.662 [2024-11-26 13:18:49.011415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.662 [2024-11-26 13:18:49.201708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:00.662 [2024-11-26 13:18:49.201756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.230 malloc0 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.230 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.489 malloc1 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.489 13:18:49 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.489 null0 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.489 [2024-11-26 13:18:49.894589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:01.489 [2024-11-26 13:18:49.896943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:01.489 [2024-11-26 13:18:49.897132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:01.489 [2024-11-26 13:18:49.897430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:01.489 [2024-11-26 13:18:49.897603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:01.489 [2024-11-26 13:18:49.898007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:01.489 [2024-11-26 13:18:49.898380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:01.489 [2024-11-26 13:18:49.898548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:01.489 [2024-11-26 13:18:49.898945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.489 [2024-11-26 13:18:49.958871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.489 13:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.058 malloc2 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.058 [2024-11-26 13:18:50.474851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:02.058 [2024-11-26 13:18:50.489804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.058 [2024-11-26 13:18:50.492286] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59568 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59568 ']' 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59568 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59568 00:06:02.058 killing process with pid 59568 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59568' 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59568 00:06:02.058 13:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59568 00:06:02.058 [2024-11-26 13:18:50.581837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:02.058 [2024-11-26 13:18:50.583032] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:02.058 [2024-11-26 13:18:50.583098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:02.058 [2024-11-26 13:18:50.583123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:02.058 [2024-11-26 13:18:50.607948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:02.059 [2024-11-26 13:18:50.608364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:02.059 [2024-11-26 13:18:50.608390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:04.003 [2024-11-26 13:18:52.047953] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:04.586 ************************************ 00:06:04.586 END TEST raid1_resize_data_offset_test 00:06:04.586 ************************************ 00:06:04.586 13:18:52 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:04.586 00:06:04.586 real 0m4.367s 00:06:04.586 user 0m4.264s 00:06:04.586 sys 0m0.719s 00:06:04.586 13:18:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.586 13:18:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.586 13:18:53 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:04.587 13:18:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:04.587 13:18:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.587 13:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:04.587 ************************************ 00:06:04.587 START TEST raid0_resize_superblock_test 00:06:04.587 ************************************ 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:04.587 Process raid pid: 59652 00:06:04.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59652 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59652' 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59652 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59652 ']' 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.587 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.587 [2024-11-26 13:18:53.113357] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:04.587 [2024-11-26 13:18:53.113501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.846 [2024-11-26 13:18:53.281217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.846 [2024-11-26 13:18:53.392872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.105 [2024-11-26 13:18:53.585415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:05.105 [2024-11-26 13:18:53.585465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:05.674 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.674 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:05.674 13:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:05.674 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.674 13:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.933 malloc0 00:06:05.933 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.933 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:05.933 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.933 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.192 [2024-11-26 13:18:54.495828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:06.192 [2024-11-26 13:18:54.495924] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.192 [2024-11-26 13:18:54.495956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:06.192 [2024-11-26 13:18:54.495974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.192 [2024-11-26 13:18:54.498885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.192 [2024-11-26 13:18:54.499241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:06.192 pt0 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.192 f05ae8e8-e53d-4b4a-b85a-f0cff6a024cd 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.192 21dadea2-ef2d-49d1-97a5-177de93ea519 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.192 13:18:54 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.192 b215faa8-eb95-4c4e-ba97-9451c5e40dd6 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.192 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.192 [2024-11-26 13:18:54.672913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 21dadea2-ef2d-49d1-97a5-177de93ea519 is claimed 00:06:06.192 [2024-11-26 13:18:54.673041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b215faa8-eb95-4c4e-ba97-9451c5e40dd6 is claimed 00:06:06.192 [2024-11-26 13:18:54.673204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:06.192 [2024-11-26 13:18:54.673227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:06.193 [2024-11-26 13:18:54.673528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:06.193 [2024-11-26 13:18:54.673771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:06.193 [2024-11-26 13:18:54.673787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:06.193 [2024-11-26 13:18:54.673949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:06.193 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:06.452 [2024-11-26 
13:18:54.801109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 [2024-11-26 13:18:54.853077] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:06.452 [2024-11-26 13:18:54.853210] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '21dadea2-ef2d-49d1-97a5-177de93ea519' was resized: old size 131072, new size 204800 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 [2024-11-26 13:18:54.861027] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:06.452 [2024-11-26 13:18:54.861053] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b215faa8-eb95-4c4e-ba97-9451c5e40dd6' was resized: old size 131072, new size 204800 00:06:06.452 
[2024-11-26 13:18:54.861087] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.452 13:18:54 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:06.452 [2024-11-26 13:18:54.977136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:06.452 13:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.712 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:06.712 13:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.712 [2024-11-26 13:18:55.028946] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:06.712 [2024-11-26 13:18:55.029019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:06.712 [2024-11-26 13:18:55.029037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:06.712 [2024-11-26 13:18:55.029057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:06.712 [2024-11-26 13:18:55.029163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:06.712 [2024-11-26 13:18:55.029209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:06.712 
[2024-11-26 13:18:55.029227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.712 [2024-11-26 13:18:55.036888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:06.712 [2024-11-26 13:18:55.036948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.712 [2024-11-26 13:18:55.036974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:06.712 [2024-11-26 13:18:55.036989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.712 [2024-11-26 13:18:55.039550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.712 [2024-11-26 13:18:55.039594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:06.712 [2024-11-26 13:18:55.041440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 21dadea2-ef2d-49d1-97a5-177de93ea519 00:06:06.712 [2024-11-26 13:18:55.041510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 21dadea2-ef2d-49d1-97a5-177de93ea519 is claimed 00:06:06.712 [2024-11-26 13:18:55.041643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b215faa8-eb95-4c4e-ba97-9451c5e40dd6 00:06:06.712 [2024-11-26 13:18:55.041676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b215faa8-eb95-4c4e-ba97-9451c5e40dd6 is claimed 00:06:06.712 [2024-11-26 13:18:55.041847] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b215faa8-eb95-4c4e-ba97-9451c5e40dd6 (2) smaller than existing raid bdev Raid (3) 00:06:06.712 [2024-11-26 13:18:55.041882] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 21dadea2-ef2d-49d1-97a5-177de93ea519: File exists 00:06:06.712 [2024-11-26 13:18:55.041926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:06.712 [2024-11-26 13:18:55.041943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:06.712 pt0 00:06:06.712 [2024-11-26 13:18:55.042212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:06.712 [2024-11-26 13:18:55.042387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:06.712 [2024-11-26 13:18:55.042407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:06.712 [2024-11-26 13:18:55.042558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:06.712 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.713 [2024-11-26 13:18:55.057775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59652 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59652 ']' 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59652 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59652 00:06:06.713 killing process with pid 59652 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59652' 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59652 00:06:06.713 [2024-11-26 13:18:55.139006] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:06.713 [2024-11-26 13:18:55.139060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:06.713 [2024-11-26 13:18:55.139102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:06.713 [2024-11-26 13:18:55.139114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:06.713 13:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59652 00:06:08.092 [2024-11-26 13:18:56.285525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:08.660 13:18:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:08.660 00:06:08.660 real 0m4.176s 00:06:08.660 user 0m4.350s 00:06:08.660 sys 0m0.666s 00:06:08.660 ************************************ 00:06:08.660 END TEST raid0_resize_superblock_test 00:06:08.660 ************************************ 00:06:08.660 13:18:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.660 13:18:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.920 13:18:57 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:08.920 13:18:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.920 13:18:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.920 13:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:08.920 ************************************ 00:06:08.920 START TEST raid1_resize_superblock_test 00:06:08.920 
************************************ 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:08.920 Process raid pid: 59745 00:06:08.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59745 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59745' 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59745 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59745 ']' 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.920 13:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.920 [2024-11-26 13:18:57.373648] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:08.920 [2024-11-26 13:18:57.374167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.180 [2024-11-26 13:18:57.556754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.180 [2024-11-26 13:18:57.667380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.440 [2024-11-26 13:18:57.858953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:09.440 [2024-11-26 13:18:57.859003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:09.699 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.699 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:09.699 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:09.699 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.699 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.269 malloc0 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.269 [2024-11-26 13:18:58.751690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:10.269 [2024-11-26 13:18:58.751786] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.269 [2024-11-26 13:18:58.751819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:10.269 [2024-11-26 13:18:58.751839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.269 [2024-11-26 13:18:58.754334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.269 [2024-11-26 13:18:58.754378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:10.269 pt0 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.269 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.529 cc7d1174-5828-4f2c-9f6f-df65cdfa2304 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.529 23985edc-b442-4123-b9ed-3f6bffe69c96 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.529 13:18:58 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.529 3f038236-9c41-4078-84b0-5fc380d28671 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.529 [2024-11-26 13:18:58.930478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 23985edc-b442-4123-b9ed-3f6bffe69c96 is claimed 00:06:10.529 [2024-11-26 13:18:58.930595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3f038236-9c41-4078-84b0-5fc380d28671 is claimed 00:06:10.529 [2024-11-26 13:18:58.930760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:10.529 [2024-11-26 13:18:58.930784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:10.529 [2024-11-26 13:18:58.931090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:10.529 [2024-11-26 13:18:58.931365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:10.529 [2024-11-26 13:18:58.931381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:10.529 [2024-11-26 13:18:58.931544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.529 13:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.529 [2024-11-26 
13:18:59.050681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:10.529 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.792 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:10.792 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:10.792 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:10.792 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:10.792 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.792 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.792 [2024-11-26 13:18:59.098641] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:10.792 [2024-11-26 13:18:59.098668] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '23985edc-b442-4123-b9ed-3f6bffe69c96' was resized: old size 131072, new size 204800 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 [2024-11-26 13:18:59.106592] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:10.793 [2024-11-26 13:18:59.106618] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3f038236-9c41-4078-84b0-5fc380d28671' was resized: old size 131072, new size 204800 00:06:10.793 
[2024-11-26 13:18:59.106650] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:10.793 13:18:59 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 [2024-11-26 13:18:59.222704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 [2024-11-26 13:18:59.274507] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:10.793 [2024-11-26 13:18:59.274584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:10.793 [2024-11-26 13:18:59.274619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:10.793 [2024-11-26 13:18:59.274767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:10.793 [2024-11-26 13:18:59.274955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:10.793 [2024-11-26 13:18:59.275039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:10.793 
[2024-11-26 13:18:59.275067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 [2024-11-26 13:18:59.282456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:10.793 [2024-11-26 13:18:59.282515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.793 [2024-11-26 13:18:59.282542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:10.793 [2024-11-26 13:18:59.282560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.793 [2024-11-26 13:18:59.285006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.793 [2024-11-26 13:18:59.285049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:10.793 [2024-11-26 13:18:59.286887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 23985edc-b442-4123-b9ed-3f6bffe69c96 00:06:10.793 [2024-11-26 13:18:59.286965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 23985edc-b442-4123-b9ed-3f6bffe69c96 is claimed 00:06:10.793 [2024-11-26 13:18:59.287083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3f038236-9c41-4078-84b0-5fc380d28671 00:06:10.793 [2024-11-26 13:18:59.287116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3f038236-9c41-4078-84b0-5fc380d28671 is claimed 00:06:10.793 [2024-11-26 13:18:59.287291] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3f038236-9c41-4078-84b0-5fc380d28671 (2) smaller than existing raid bdev Raid (3) 00:06:10.793 [2024-11-26 13:18:59.287322] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 23985edc-b442-4123-b9ed-3f6bffe69c96: File exists 00:06:10.793 [2024-11-26 13:18:59.287368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:10.793 [2024-11-26 13:18:59.287385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:10.793 pt0 00:06:10.793 [2024-11-26 13:18:59.287653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:10.793 [2024-11-26 13:18:59.287813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:10.793 [2024-11-26 13:18:59.287834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:10.793 [2024-11-26 13:18:59.287982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.793 [2024-11-26 13:18:59.303176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59745 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59745 ']' 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59745 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.793 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59745 00:06:11.052 killing process with pid 59745 00:06:11.052 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.052 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.052 13:18:59 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59745' 00:06:11.052 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59745 00:06:11.052 [2024-11-26 13:18:59.383160] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:11.052 [2024-11-26 13:18:59.383213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:11.052 [2024-11-26 13:18:59.383279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:11.052 [2024-11-26 13:18:59.383294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:11.052 13:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59745 00:06:11.990 [2024-11-26 13:19:00.527660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:12.932 13:19:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:12.932 00:06:12.932 real 0m4.184s 00:06:12.932 user 0m4.371s 00:06:12.932 sys 0m0.675s 00:06:12.932 13:19:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.932 ************************************ 00:06:12.932 END TEST raid1_resize_superblock_test 00:06:12.932 ************************************ 00:06:12.932 13:19:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.932 13:19:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:13.191 13:19:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:13.191 13:19:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:13.191 13:19:01 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:13.191 13:19:01 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:13.191 13:19:01 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:13.191 
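For reference, the RPC sequence that raid1_resize_superblock_test drives through `rpc_cmd` above can be sketched as the JSON-RPC payloads sent to the bdev_svc app on /var/tmp/spdk.sock. This is a minimal sketch, not SPDK source: the JSON parameter names (`total_size`, `lvs_name`, `superblock`, …) are assumptions inferred from the CLI flags in the log, and may not match the target's actual schema exactly.

```python
import json

def rpc_request(method, **params):
    # Minimal JSON-RPC 2.0 envelope, of the kind scripts/rpc.py sends
    # over the UNIX domain socket /var/tmp/spdk.sock.
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

# The sequence exercised by the test (parameter names are assumptions
# inferred from the CLI flags echoed in the log):
seq = [
    rpc_request("bdev_malloc_create", name="malloc0", total_size=512, block_size=512),
    rpc_request("bdev_passthru_create", base_bdev_name="malloc0", name="pt0"),
    rpc_request("bdev_lvol_create_lvstore", bdev_name="pt0", lvs_name="lvs0"),
    rpc_request("bdev_lvol_create", lvs_name="lvs0", lvol_name="lvol0", size=64),
    rpc_request("bdev_lvol_create", lvs_name="lvs0", lvol_name="lvol1", size=64),
    # raid1 over the two lvols, with the on-disk superblock enabled (-s):
    rpc_request("bdev_raid_create", name="Raid", raid_level="1",
                base_bdevs=["lvs0/lvol0", "lvs0/lvol1"], superblock=True),
    # Growing both base bdevs lets the raid bdev grow (122880 -> 196608 blocks):
    rpc_request("bdev_lvol_resize", name="lvs0/lvol0", size=100),
    rpc_request("bdev_lvol_resize", name="lvs0/lvol1", size=100),
]

for r in seq:
    print(json.dumps(r))
```

The superblock written by `-s` is what lets the second `bdev_passthru_create pt0` pass trigger re-examination: the log shows "raid superblock found on bdev …" for both lvols and the raid bdev reassembling at the new 196608-block size without an explicit `bdev_raid_create`.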
13:19:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.191 13:19:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.191 13:19:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:13.191 ************************************ 00:06:13.191 START TEST raid_function_test_raid0 00:06:13.191 ************************************ 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:13.191 Process raid pid: 59842 00:06:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59842 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59842' 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59842 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 59842 ']' 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.191 13:19:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:13.191 [2024-11-26 13:19:01.627902] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:13.191 [2024-11-26 13:19:01.628407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.451 [2024-11-26 13:19:01.809102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.451 [2024-11-26 13:19:01.921019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.710 [2024-11-26 13:19:02.112783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:13.710 [2024-11-26 13:19:02.113045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:14.279 Base_1 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.279 
13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:14.279 Base_2 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:14.279 [2024-11-26 13:19:02.647300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:14.279 [2024-11-26 13:19:02.649450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:14.279 [2024-11-26 13:19:02.649875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:14.279 [2024-11-26 13:19:02.649903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:14.279 [2024-11-26 13:19:02.650225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:14.279 [2024-11-26 13:19:02.650416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:14.279 [2024-11-26 13:19:02.650431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:14.279 [2024-11-26 13:19:02.650588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:14.279 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:14.538 [2024-11-26 13:19:02.967373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:14.538 /dev/nbd0 00:06:14.538 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.538 13:19:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:14.538 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.538 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:14.538 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.538 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.538 13:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.538 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:14.538 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.538 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.539 1+0 records in 00:06:14.539 1+0 records out 00:06:14.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523713 s, 7.8 MB/s 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:14.539 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:14.798 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.798 { 00:06:14.798 "nbd_device": "/dev/nbd0", 00:06:14.798 "bdev_name": "raid" 00:06:14.798 } 00:06:14.798 ]' 00:06:14.798 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.798 { 00:06:14.798 "nbd_device": "/dev/nbd0", 00:06:14.798 "bdev_name": "raid" 00:06:14.798 } 00:06:14.798 ]' 00:06:14.798 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:15.058 4096+0 records in 00:06:15.058 4096+0 records out 00:06:15.058 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0246828 s, 85.0 MB/s 00:06:15.058 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:15.317 4096+0 records in 00:06:15.317 4096+0 records out 00:06:15.317 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.27467 s, 7.6 MB/s 00:06:15.317 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:15.318 128+0 records in 00:06:15.318 128+0 records out 00:06:15.318 65536 bytes (66 kB, 64 KiB) copied, 0.000542571 s, 121 MB/s 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:15.318 2035+0 records in 00:06:15.318 2035+0 records out 00:06:15.318 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00733967 s, 142 MB/s 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:15.318 456+0 records in 00:06:15.318 456+0 records out 00:06:15.318 233472 bytes (233 kB, 228 KiB) copied, 0.00354212 s, 65.9 MB/s 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.318 13:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.577 [2024-11-26 13:19:04.013460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:15.577 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59842 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 59842 ']' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 59842 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59842 00:06:15.837 killing process with pid 59842 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59842' 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 59842 00:06:15.837 [2024-11-26 13:19:04.303623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:15.837 [2024-11-26 13:19:04.303714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:15.837 13:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 59842 00:06:15.837 [2024-11-26 13:19:04.303767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:15.837 [2024-11-26 13:19:04.303791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:16.097 [2024-11-26 13:19:04.452371] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:17.036 ************************************ 00:06:17.036 END TEST raid_function_test_raid0 00:06:17.036 ************************************ 00:06:17.036 13:19:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:17.036 00:06:17.036 real 0m3.838s 00:06:17.036 user 0m4.613s 00:06:17.036 sys 0m0.979s 00:06:17.036 13:19:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.036 13:19:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:17.036 13:19:05 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:17.036 13:19:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.036 13:19:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.036 13:19:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:17.036 
************************************ 00:06:17.036 START TEST raid_function_test_concat 00:06:17.036 ************************************ 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:17.036 Process raid pid: 59974 00:06:17.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59974 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59974' 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59974 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 59974 ']' 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.036 13:19:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:17.036 [2024-11-26 13:19:05.524245] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:17.036 [2024-11-26 13:19:05.524723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.296 [2024-11-26 13:19:05.705333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.296 [2024-11-26 13:19:05.814320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.556 [2024-11-26 13:19:06.005652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:17.556 [2024-11-26 13:19:06.005988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:18.126 Base_1 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:18.126 Base_2 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:18.126 [2024-11-26 13:19:06.564666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:18.126 [2024-11-26 13:19:06.566924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:18.126 [2024-11-26 13:19:06.567021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:18.126 [2024-11-26 13:19:06.567041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:18.126 [2024-11-26 13:19:06.567321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:18.126 [2024-11-26 13:19:06.567494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:18.126 [2024-11-26 13:19:06.567509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:18.126 [2024-11-26 13:19:06.567661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.126 13:19:06 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:18.126 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:18.386 [2024-11-26 13:19:06.864717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:18.386 /dev/nbd0 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.386 1+0 records in 00:06:18.386 1+0 records out 00:06:18.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266391 s, 15.4 MB/s 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.386 
13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:18.386 13:19:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:18.645 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.645 { 00:06:18.645 "nbd_device": "/dev/nbd0", 00:06:18.645 "bdev_name": "raid" 00:06:18.645 } 00:06:18.645 ]' 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.904 { 00:06:18.904 "nbd_device": "/dev/nbd0", 00:06:18.904 "bdev_name": "raid" 00:06:18.904 } 00:06:18.904 ]' 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:18.904 
13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:18.904 4096+0 records in 00:06:18.904 4096+0 records out 00:06:18.904 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0321055 s, 65.3 MB/s 00:06:18.904 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:19.163 4096+0 records in 00:06:19.163 4096+0 
records out 00:06:19.163 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.274293 s, 7.6 MB/s 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:19.163 128+0 records in 00:06:19.163 128+0 records out 00:06:19.163 65536 bytes (66 kB, 64 KiB) copied, 0.000673603 s, 97.3 MB/s 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:06:19.163 2035+0 records in 00:06:19.163 2035+0 records out 00:06:19.163 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.010955 s, 95.1 MB/s 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:19.163 456+0 records in 00:06:19.163 456+0 records out 00:06:19.163 233472 bytes (233 kB, 228 KiB) copied, 0.00284934 s, 81.9 MB/s 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.163 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:19.422 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.423 [2024-11-26 13:19:07.974470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:19.423 13:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:19.423 13:19:07 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59974 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 59974 ']' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 59974 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59974 00:06:19.992 killing process with pid 59974 00:06:19.992 13:19:08 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59974' 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 59974 00:06:19.992 [2024-11-26 13:19:08.359187] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:19.992 [2024-11-26 13:19:08.359278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:19.992 13:19:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 59974 00:06:19.992 [2024-11-26 13:19:08.359329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:19.992 [2024-11-26 13:19:08.359347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:19.992 [2024-11-26 13:19:08.505547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:20.930 ************************************ 00:06:20.930 END TEST raid_function_test_concat 00:06:20.930 ************************************ 00:06:20.930 13:19:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:20.930 00:06:20.930 real 0m4.000s 00:06:20.930 user 0m4.950s 00:06:20.930 sys 0m0.973s 00:06:20.930 13:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.930 13:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:20.930 13:19:09 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:20.930 13:19:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:20.930 13:19:09 bdev_raid --
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.930 13:19:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:20.930 ************************************ 00:06:20.930 START TEST raid0_resize_test 00:06:20.930 ************************************ 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:20.930 Process raid pid: 60100 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60100 00:06:20.930 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60100' 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60100 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60100 ']' 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:20.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.931 13:19:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.189 [2024-11-26 13:19:09.582366] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:21.189 [2024-11-26 13:19:09.582581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.447 [2024-11-26 13:19:09.764760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.447 [2024-11-26 13:19:09.876701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.706 [2024-11-26 13:19:10.069310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.706 [2024-11-26 13:19:10.069361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.275 Base_1 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.275 
13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.275 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.275 Base_2 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.276 [2024-11-26 13:19:10.559125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:22.276 [2024-11-26 13:19:10.561257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:22.276 [2024-11-26 13:19:10.561328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:22.276 [2024-11-26 13:19:10.561345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:22.276 [2024-11-26 13:19:10.561594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:22.276 [2024-11-26 13:19:10.561746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:22.276 [2024-11-26 13:19:10.561760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:22.276 [2024-11-26 13:19:10.561901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.276 
13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.276 [2024-11-26 13:19:10.567103] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:22.276 [2024-11-26 13:19:10.567134] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:22.276 true 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.276 [2024-11-26 13:19:10.579268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.276 [2024-11-26 13:19:10.631091] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:22.276 [2024-11-26 13:19:10.631117] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:22.276 [2024-11-26 13:19:10.631148] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:22.276 true 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.276 [2024-11-26 13:19:10.643275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60100 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60100 ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60100 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60100 00:06:22.276 killing process with pid 60100 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60100' 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60100 00:06:22.276 [2024-11-26 13:19:10.718651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:22.276 [2024-11-26 13:19:10.718717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:22.276 [2024-11-26 13:19:10.718763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:22.276 13:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60100 00:06:22.276 [2024-11-26 13:19:10.718775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:22.276 [2024-11-26 13:19:10.730078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:23.213 ************************************ 00:06:23.213 END TEST raid0_resize_test 00:06:23.213 ************************************ 00:06:23.213 13:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:23.213 00:06:23.213 real 0m2.158s 00:06:23.213 user 0m2.347s 
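[Annotation, not part of the captured log: the size checks in the raid0_resize_test run above reduce to simple arithmetic. As I read the output, a raid0 bdev exposes num_bases x (smallest base bdev block count), which is why the array stayed at 131072 blocks after only Base_1 was resized and jumped to 262144 blocks (128 MiB) once Base_2 was resized too. A minimal sketch of that math, with the base sizes taken from the `bdev_null_resize ... 64` calls in the log:]

```shell
# Sketch (assumption: raid0 capacity = num_bases * min base block count,
# inferred from the log above; not an SPDK API).
blksize=512
num_bases=2
base1_mb=64              # Base_1 after 'bdev_null_resize Base_1 64'
base2_mb=64              # Base_2 after 'bdev_null_resize Base_2 64'
min_mb=$base1_mb
if [ "$base2_mb" -lt "$min_mb" ]; then min_mb=$base2_mb; fi
min_blkcnt=$(( min_mb * 1024 * 1024 / blksize ))          # 131072 blocks per base
raid_blkcnt=$(( num_bases * min_blkcnt ))                 # 262144, as in the log
raid_size_mb=$(( raid_blkcnt * blksize / 1024 / 1024 ))   # 128 MiB
echo "blkcnt=$raid_blkcnt raid_size_mb=$raid_size_mb"
```

[This reproduces the log's final check: `blkcnt=262144`, `expected_size=128`, so `'[' 128 '!=' 128 ']'` fails and the test passes.]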
00:06:23.213 sys 0m0.422s 00:06:23.213 13:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.213 13:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.213 13:19:11 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:23.213 13:19:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.213 13:19:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.213 13:19:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:23.213 ************************************ 00:06:23.213 START TEST raid1_resize_test 00:06:23.213 ************************************ 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60157 00:06:23.213 Process raid pid: 60157 00:06:23.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60157' 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60157 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60157 ']' 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.213 13:19:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.473 [2024-11-26 13:19:11.803697] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:23.473 [2024-11-26 13:19:11.804190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.473 [2024-11-26 13:19:11.983856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.732 [2024-11-26 13:19:12.098186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.732 [2024-11-26 13:19:12.288417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.732 [2024-11-26 13:19:12.288467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 Base_1 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 Base_2 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 [2024-11-26 13:19:12.715742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:24.301 [2024-11-26 13:19:12.718060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:24.301 [2024-11-26 13:19:12.718133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:24.301 [2024-11-26 13:19:12.718151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:24.301 [2024-11-26 13:19:12.718423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:24.301 [2024-11-26 13:19:12.718569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:24.301 [2024-11-26 13:19:12.718584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:24.301 [2024-11-26 13:19:12.718732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 [2024-11-26 13:19:12.723742] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:24.301 [2024-11-26 13:19:12.723778] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:24.301 true 00:06:24.301 
13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 [2024-11-26 13:19:12.735886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 [2024-11-26 13:19:12.787715] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:24.301 [2024-11-26 13:19:12.787740] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:24.301 [2024-11-26 13:19:12.787774] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:24.301 true 00:06:24.301 13:19:12 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 [2024-11-26 13:19:12.799888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60157 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60157 ']' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60157 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.301 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60157 00:06:24.559 killing process with pid 60157 00:06:24.559 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.559 13:19:12 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.559 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60157' 00:06:24.559 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60157 00:06:24.559 [2024-11-26 13:19:12.880556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:24.559 [2024-11-26 13:19:12.880622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:24.559 13:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60157 00:06:24.559 [2024-11-26 13:19:12.881037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:24.559 [2024-11-26 13:19:12.881064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:24.559 [2024-11-26 13:19:12.892576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:25.496 13:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:25.496 00:06:25.496 real 0m2.145s 00:06:25.496 user 0m2.314s 00:06:25.496 sys 0m0.390s 00:06:25.496 13:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.496 13:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.496 ************************************ 00:06:25.496 END TEST raid1_resize_test 00:06:25.496 ************************************ 00:06:25.496 13:19:13 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:25.496 13:19:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:25.496 13:19:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:25.496 13:19:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:25.496 13:19:13 bdev_raid -- 
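[Annotation, not part of the captured log: the raid1_resize_test run above checks the mirrored-size rule. As I read the output, a raid1 bdev exposes only the smallest base bdev's block count, which is why the array stayed at 65536 blocks (32 MiB) after only Base_1 was resized and doubled to 131072 blocks (64 MiB) once Base_2 was resized as well. A minimal sketch of that math under the same assumption:]

```shell
# Sketch (assumption: raid1 capacity = min base block count, i.e. a mirror;
# inferred from the log above, not an SPDK API).
blksize=512
base1_mb=64              # Base_1 after 'bdev_null_resize Base_1 64'
base2_mb=64              # Base_2 after 'bdev_null_resize Base_2 64'
min_mb=$base1_mb
if [ "$base2_mb" -lt "$min_mb" ]; then min_mb=$base2_mb; fi
raid_blkcnt=$(( min_mb * 1024 * 1024 / blksize ))         # 131072, as in the log
raid_size_mb=$(( raid_blkcnt * blksize / 1024 / 1024 ))   # 64 MiB
echo "blkcnt=$raid_blkcnt raid_size_mb=$raid_size_mb"
```

[This matches the log's final check: `blkcnt=131072`, `expected_size=64`, so `'[' 64 '!=' 64 ']'` fails and the test passes.]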
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.496 13:19:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:25.496 ************************************ 00:06:25.496 START TEST raid_state_function_test 00:06:25.496 ************************************ 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:25.496 Process raid pid: 60214 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:25.496 13:19:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60214 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60214' 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60214 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60214 ']' 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:25.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.496 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.496 [2024-11-26 13:19:13.974813] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:25.496 [2024-11-26 13:19:13.975186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.755 [2024-11-26 13:19:14.134375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.755 [2024-11-26 13:19:14.241674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.014 [2024-11-26 13:19:14.416999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:26.014 [2024-11-26 13:19:14.417204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.582 [2024-11-26 13:19:14.952759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:26.582 [2024-11-26 13:19:14.952817] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:26.582 [2024-11-26 13:19:14.952832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:26.582 [2024-11-26 13:19:14.952846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.582 13:19:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.582 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.582 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:26.582 "name": "Existed_Raid", 00:06:26.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:26.582 "strip_size_kb": 64, 00:06:26.582 "state": "configuring", 00:06:26.582 "raid_level": "raid0", 00:06:26.582 "superblock": false, 00:06:26.582 "num_base_bdevs": 2, 00:06:26.582 "num_base_bdevs_discovered": 0, 00:06:26.582 "num_base_bdevs_operational": 2, 00:06:26.582 "base_bdevs_list": [ 00:06:26.582 { 00:06:26.582 "name": "BaseBdev1", 00:06:26.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:26.582 "is_configured": false, 00:06:26.582 "data_offset": 0, 00:06:26.582 "data_size": 0 00:06:26.582 }, 00:06:26.582 { 00:06:26.582 "name": "BaseBdev2", 00:06:26.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:26.582 "is_configured": false, 00:06:26.582 "data_offset": 0, 00:06:26.582 "data_size": 0 00:06:26.582 } 00:06:26.582 ] 00:06:26.582 }' 00:06:26.582 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:26.582 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.151 [2024-11-26 13:19:15.480891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:27.151 [2024-11-26 13:19:15.480922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:27.151 13:19:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.151 [2024-11-26 13:19:15.488891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:27.151 [2024-11-26 13:19:15.488935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:27.151 [2024-11-26 13:19:15.488948] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:27.151 [2024-11-26 13:19:15.488964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.151 [2024-11-26 13:19:15.527899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:27.151 BaseBdev1 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:27.151 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.152 [ 00:06:27.152 { 00:06:27.152 "name": "BaseBdev1", 00:06:27.152 "aliases": [ 00:06:27.152 "366be8a1-2bf3-4971-a687-d0222173c69c" 00:06:27.152 ], 00:06:27.152 "product_name": "Malloc disk", 00:06:27.152 "block_size": 512, 00:06:27.152 "num_blocks": 65536, 00:06:27.152 "uuid": "366be8a1-2bf3-4971-a687-d0222173c69c", 00:06:27.152 "assigned_rate_limits": { 00:06:27.152 "rw_ios_per_sec": 0, 00:06:27.152 "rw_mbytes_per_sec": 0, 00:06:27.152 "r_mbytes_per_sec": 0, 00:06:27.152 "w_mbytes_per_sec": 0 00:06:27.152 }, 00:06:27.152 "claimed": true, 00:06:27.152 "claim_type": "exclusive_write", 00:06:27.152 "zoned": false, 00:06:27.152 "supported_io_types": { 00:06:27.152 "read": true, 00:06:27.152 "write": true, 00:06:27.152 "unmap": true, 00:06:27.152 "flush": true, 00:06:27.152 "reset": true, 00:06:27.152 
"nvme_admin": false, 00:06:27.152 "nvme_io": false, 00:06:27.152 "nvme_io_md": false, 00:06:27.152 "write_zeroes": true, 00:06:27.152 "zcopy": true, 00:06:27.152 "get_zone_info": false, 00:06:27.152 "zone_management": false, 00:06:27.152 "zone_append": false, 00:06:27.152 "compare": false, 00:06:27.152 "compare_and_write": false, 00:06:27.152 "abort": true, 00:06:27.152 "seek_hole": false, 00:06:27.152 "seek_data": false, 00:06:27.152 "copy": true, 00:06:27.152 "nvme_iov_md": false 00:06:27.152 }, 00:06:27.152 "memory_domains": [ 00:06:27.152 { 00:06:27.152 "dma_device_id": "system", 00:06:27.152 "dma_device_type": 1 00:06:27.152 }, 00:06:27.152 { 00:06:27.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.152 "dma_device_type": 2 00:06:27.152 } 00:06:27.152 ], 00:06:27.152 "driver_specific": {} 00:06:27.152 } 00:06:27.152 ] 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:27.152 "name": "Existed_Raid", 00:06:27.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.152 "strip_size_kb": 64, 00:06:27.152 "state": "configuring", 00:06:27.152 "raid_level": "raid0", 00:06:27.152 "superblock": false, 00:06:27.152 "num_base_bdevs": 2, 00:06:27.152 "num_base_bdevs_discovered": 1, 00:06:27.152 "num_base_bdevs_operational": 2, 00:06:27.152 "base_bdevs_list": [ 00:06:27.152 { 00:06:27.152 "name": "BaseBdev1", 00:06:27.152 "uuid": "366be8a1-2bf3-4971-a687-d0222173c69c", 00:06:27.152 "is_configured": true, 00:06:27.152 "data_offset": 0, 00:06:27.152 "data_size": 65536 00:06:27.152 }, 00:06:27.152 { 00:06:27.152 "name": "BaseBdev2", 00:06:27.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.152 "is_configured": false, 00:06:27.152 "data_offset": 0, 00:06:27.152 "data_size": 0 00:06:27.152 } 00:06:27.152 ] 00:06:27.152 }' 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:27.152 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.720 13:19:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.720 [2024-11-26 13:19:16.064044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:27.720 [2024-11-26 13:19:16.064220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.720 [2024-11-26 13:19:16.076085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:27.720 [2024-11-26 13:19:16.078386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:27.720 [2024-11-26 13:19:16.078610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:27.720 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:27.721 "name": "Existed_Raid", 00:06:27.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.721 "strip_size_kb": 64, 00:06:27.721 "state": "configuring", 00:06:27.721 "raid_level": "raid0", 00:06:27.721 "superblock": false, 00:06:27.721 "num_base_bdevs": 2, 00:06:27.721 "num_base_bdevs_discovered": 1, 00:06:27.721 "num_base_bdevs_operational": 2, 
00:06:27.721 "base_bdevs_list": [ 00:06:27.721 { 00:06:27.721 "name": "BaseBdev1", 00:06:27.721 "uuid": "366be8a1-2bf3-4971-a687-d0222173c69c", 00:06:27.721 "is_configured": true, 00:06:27.721 "data_offset": 0, 00:06:27.721 "data_size": 65536 00:06:27.721 }, 00:06:27.721 { 00:06:27.721 "name": "BaseBdev2", 00:06:27.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.721 "is_configured": false, 00:06:27.721 "data_offset": 0, 00:06:27.721 "data_size": 0 00:06:27.721 } 00:06:27.721 ] 00:06:27.721 }' 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:27.721 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.302 [2024-11-26 13:19:16.625828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:28.302 [2024-11-26 13:19:16.626051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:28.302 [2024-11-26 13:19:16.626075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:28.302 [2024-11-26 13:19:16.626435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:28.302 [2024-11-26 13:19:16.626657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:28.302 [2024-11-26 13:19:16.626678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:28.302 [2024-11-26 13:19:16.626936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.302 BaseBdev2 00:06:28.302 
13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.302 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.302 [ 00:06:28.302 { 00:06:28.302 "name": "BaseBdev2", 00:06:28.302 "aliases": [ 00:06:28.302 "83950191-9e9f-4bee-8c30-c9e40ea33276" 00:06:28.302 ], 00:06:28.302 "product_name": "Malloc disk", 00:06:28.302 "block_size": 512, 00:06:28.302 "num_blocks": 65536, 00:06:28.302 "uuid": "83950191-9e9f-4bee-8c30-c9e40ea33276", 00:06:28.302 "assigned_rate_limits": { 00:06:28.302 "rw_ios_per_sec": 0, 00:06:28.302 "rw_mbytes_per_sec": 0, 
00:06:28.302 "r_mbytes_per_sec": 0, 00:06:28.302 "w_mbytes_per_sec": 0 00:06:28.302 }, 00:06:28.302 "claimed": true, 00:06:28.302 "claim_type": "exclusive_write", 00:06:28.302 "zoned": false, 00:06:28.302 "supported_io_types": { 00:06:28.302 "read": true, 00:06:28.302 "write": true, 00:06:28.302 "unmap": true, 00:06:28.302 "flush": true, 00:06:28.302 "reset": true, 00:06:28.302 "nvme_admin": false, 00:06:28.302 "nvme_io": false, 00:06:28.302 "nvme_io_md": false, 00:06:28.302 "write_zeroes": true, 00:06:28.302 "zcopy": true, 00:06:28.302 "get_zone_info": false, 00:06:28.302 "zone_management": false, 00:06:28.302 "zone_append": false, 00:06:28.302 "compare": false, 00:06:28.302 "compare_and_write": false, 00:06:28.302 "abort": true, 00:06:28.302 "seek_hole": false, 00:06:28.302 "seek_data": false, 00:06:28.302 "copy": true, 00:06:28.302 "nvme_iov_md": false 00:06:28.302 }, 00:06:28.302 "memory_domains": [ 00:06:28.302 { 00:06:28.302 "dma_device_id": "system", 00:06:28.302 "dma_device_type": 1 00:06:28.302 }, 00:06:28.302 { 00:06:28.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.302 "dma_device_type": 2 00:06:28.302 } 00:06:28.302 ], 00:06:28.302 "driver_specific": {} 00:06:28.303 } 00:06:28.303 ] 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:28.303 "name": "Existed_Raid", 00:06:28.303 "uuid": "45efa31d-30e0-48b0-b304-558863c29164", 00:06:28.303 "strip_size_kb": 64, 00:06:28.303 "state": "online", 00:06:28.303 "raid_level": "raid0", 00:06:28.303 "superblock": false, 00:06:28.303 "num_base_bdevs": 2, 00:06:28.303 "num_base_bdevs_discovered": 2, 00:06:28.303 "num_base_bdevs_operational": 2, 00:06:28.303 "base_bdevs_list": [ 00:06:28.303 { 00:06:28.303 "name": "BaseBdev1", 00:06:28.303 "uuid": "366be8a1-2bf3-4971-a687-d0222173c69c", 00:06:28.303 
"is_configured": true, 00:06:28.303 "data_offset": 0, 00:06:28.303 "data_size": 65536 00:06:28.303 }, 00:06:28.303 { 00:06:28.303 "name": "BaseBdev2", 00:06:28.303 "uuid": "83950191-9e9f-4bee-8c30-c9e40ea33276", 00:06:28.303 "is_configured": true, 00:06:28.303 "data_offset": 0, 00:06:28.303 "data_size": 65536 00:06:28.303 } 00:06:28.303 ] 00:06:28.303 }' 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:28.303 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:28.887 [2024-11-26 13:19:17.178335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.887 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:06:28.887 "name": "Existed_Raid", 00:06:28.887 "aliases": [ 00:06:28.887 "45efa31d-30e0-48b0-b304-558863c29164" 00:06:28.887 ], 00:06:28.887 "product_name": "Raid Volume", 00:06:28.887 "block_size": 512, 00:06:28.887 "num_blocks": 131072, 00:06:28.887 "uuid": "45efa31d-30e0-48b0-b304-558863c29164", 00:06:28.887 "assigned_rate_limits": { 00:06:28.887 "rw_ios_per_sec": 0, 00:06:28.887 "rw_mbytes_per_sec": 0, 00:06:28.887 "r_mbytes_per_sec": 0, 00:06:28.887 "w_mbytes_per_sec": 0 00:06:28.887 }, 00:06:28.887 "claimed": false, 00:06:28.887 "zoned": false, 00:06:28.887 "supported_io_types": { 00:06:28.887 "read": true, 00:06:28.887 "write": true, 00:06:28.887 "unmap": true, 00:06:28.887 "flush": true, 00:06:28.887 "reset": true, 00:06:28.887 "nvme_admin": false, 00:06:28.887 "nvme_io": false, 00:06:28.887 "nvme_io_md": false, 00:06:28.887 "write_zeroes": true, 00:06:28.887 "zcopy": false, 00:06:28.887 "get_zone_info": false, 00:06:28.887 "zone_management": false, 00:06:28.887 "zone_append": false, 00:06:28.887 "compare": false, 00:06:28.887 "compare_and_write": false, 00:06:28.887 "abort": false, 00:06:28.887 "seek_hole": false, 00:06:28.887 "seek_data": false, 00:06:28.887 "copy": false, 00:06:28.887 "nvme_iov_md": false 00:06:28.887 }, 00:06:28.887 "memory_domains": [ 00:06:28.887 { 00:06:28.887 "dma_device_id": "system", 00:06:28.887 "dma_device_type": 1 00:06:28.887 }, 00:06:28.887 { 00:06:28.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.887 "dma_device_type": 2 00:06:28.887 }, 00:06:28.887 { 00:06:28.887 "dma_device_id": "system", 00:06:28.887 "dma_device_type": 1 00:06:28.887 }, 00:06:28.887 { 00:06:28.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.887 "dma_device_type": 2 00:06:28.888 } 00:06:28.888 ], 00:06:28.888 "driver_specific": { 00:06:28.888 "raid": { 00:06:28.888 "uuid": "45efa31d-30e0-48b0-b304-558863c29164", 00:06:28.888 "strip_size_kb": 64, 00:06:28.888 "state": "online", 00:06:28.888 "raid_level": "raid0", 
00:06:28.888 "superblock": false, 00:06:28.888 "num_base_bdevs": 2, 00:06:28.888 "num_base_bdevs_discovered": 2, 00:06:28.888 "num_base_bdevs_operational": 2, 00:06:28.888 "base_bdevs_list": [ 00:06:28.888 { 00:06:28.888 "name": "BaseBdev1", 00:06:28.888 "uuid": "366be8a1-2bf3-4971-a687-d0222173c69c", 00:06:28.888 "is_configured": true, 00:06:28.888 "data_offset": 0, 00:06:28.888 "data_size": 65536 00:06:28.888 }, 00:06:28.888 { 00:06:28.888 "name": "BaseBdev2", 00:06:28.888 "uuid": "83950191-9e9f-4bee-8c30-c9e40ea33276", 00:06:28.888 "is_configured": true, 00:06:28.888 "data_offset": 0, 00:06:28.888 "data_size": 65536 00:06:28.888 } 00:06:28.888 ] 00:06:28.888 } 00:06:28.888 } 00:06:28.888 }' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:28.888 BaseBdev2' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.888 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.888 [2024-11-26 13:19:17.442125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:28.888 [2024-11-26 13:19:17.442159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:28.888 [2024-11-26 13:19:17.442206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.147 13:19:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.147 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:29.147 "name": "Existed_Raid", 00:06:29.147 "uuid": "45efa31d-30e0-48b0-b304-558863c29164", 00:06:29.147 "strip_size_kb": 64, 00:06:29.147 "state": "offline", 00:06:29.147 "raid_level": "raid0", 00:06:29.147 "superblock": false, 00:06:29.147 "num_base_bdevs": 2, 00:06:29.147 "num_base_bdevs_discovered": 1, 00:06:29.147 "num_base_bdevs_operational": 1, 00:06:29.147 "base_bdevs_list": [ 00:06:29.147 { 00:06:29.147 "name": null, 00:06:29.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:29.147 "is_configured": false, 00:06:29.147 "data_offset": 0, 00:06:29.147 "data_size": 65536 00:06:29.147 }, 00:06:29.147 { 00:06:29.147 "name": "BaseBdev2", 00:06:29.147 "uuid": "83950191-9e9f-4bee-8c30-c9e40ea33276", 00:06:29.147 "is_configured": true, 00:06:29.147 "data_offset": 0, 00:06:29.147 "data_size": 65536 00:06:29.148 } 00:06:29.148 ] 00:06:29.148 }' 00:06:29.148 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:29.148 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:29.716 13:19:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.716 [2024-11-26 13:19:18.086629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:29.716 [2024-11-26 13:19:18.086688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60214 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60214 ']' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60214 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60214 00:06:29.716 killing process with pid 60214 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60214' 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60214 00:06:29.716 [2024-11-26 13:19:18.244489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:29.716 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60214 00:06:29.716 [2024-11-26 13:19:18.263653] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:30.654 00:06:30.654 real 0m5.255s 00:06:30.654 user 0m8.068s 00:06:30.654 sys 0m0.729s 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:30.654 ************************************ 00:06:30.654 END TEST raid_state_function_test 00:06:30.654 ************************************ 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.654 13:19:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:30.654 13:19:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:30.654 13:19:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.654 13:19:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.654 ************************************ 00:06:30.654 START TEST raid_state_function_test_sb 00:06:30.654 ************************************ 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:30.654 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:30.655 Process raid pid: 60467 00:06:30.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60467 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60467' 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60467 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60467 ']' 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.655 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:30.913 [2024-11-26 13:19:19.360355] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:30.913 [2024-11-26 13:19:19.360935] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.171 [2024-11-26 13:19:19.554109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.171 [2024-11-26 13:19:19.656922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.430 [2024-11-26 13:19:19.828650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.430 [2024-11-26 13:19:19.828690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:31.997 [2024-11-26 13:19:20.282854] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:31.997 [2024-11-26 13:19:20.282923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:31.997 [2024-11-26 13:19:20.282939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:31.997 [2024-11-26 13:19:20.282952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.997 
13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.997 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:31.997 "name": "Existed_Raid", 00:06:31.997 "uuid": "b8af897f-9be5-4487-898d-ec2908da80ff", 00:06:31.997 "strip_size_kb": 
64, 00:06:31.997 "state": "configuring", 00:06:31.997 "raid_level": "raid0", 00:06:31.997 "superblock": true, 00:06:31.997 "num_base_bdevs": 2, 00:06:31.997 "num_base_bdevs_discovered": 0, 00:06:31.997 "num_base_bdevs_operational": 2, 00:06:31.997 "base_bdevs_list": [ 00:06:31.997 { 00:06:31.997 "name": "BaseBdev1", 00:06:31.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.997 "is_configured": false, 00:06:31.998 "data_offset": 0, 00:06:31.998 "data_size": 0 00:06:31.998 }, 00:06:31.998 { 00:06:31.998 "name": "BaseBdev2", 00:06:31.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.998 "is_configured": false, 00:06:31.998 "data_offset": 0, 00:06:31.998 "data_size": 0 00:06:31.998 } 00:06:31.998 ] 00:06:31.998 }' 00:06:31.998 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:31.998 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:32.257 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:32.257 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.257 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:32.257 [2024-11-26 13:19:20.810933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:32.257 [2024-11-26 13:19:20.810966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:32.257 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.257 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:32.257 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.257 13:19:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:32.257 [2024-11-26 13:19:20.818939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:32.257 [2024-11-26 13:19:20.818997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:32.257 [2024-11-26 13:19:20.819010] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:32.257 [2024-11-26 13:19:20.819042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:32.517 [2024-11-26 13:19:20.857661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:32.517 BaseBdev1 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.517 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:32.517 [ 00:06:32.517 { 00:06:32.517 "name": "BaseBdev1", 00:06:32.517 "aliases": [ 00:06:32.517 "4b4a96f1-f5ca-42eb-953a-0c14d9ff736b" 00:06:32.517 ], 00:06:32.517 "product_name": "Malloc disk", 00:06:32.517 "block_size": 512, 00:06:32.517 "num_blocks": 65536, 00:06:32.517 "uuid": "4b4a96f1-f5ca-42eb-953a-0c14d9ff736b", 00:06:32.517 "assigned_rate_limits": { 00:06:32.517 "rw_ios_per_sec": 0, 00:06:32.517 "rw_mbytes_per_sec": 0, 00:06:32.517 "r_mbytes_per_sec": 0, 00:06:32.517 "w_mbytes_per_sec": 0 00:06:32.517 }, 00:06:32.517 "claimed": true, 00:06:32.517 "claim_type": "exclusive_write", 00:06:32.517 "zoned": false, 00:06:32.517 "supported_io_types": { 00:06:32.517 "read": true, 00:06:32.517 "write": true, 00:06:32.517 "unmap": true, 00:06:32.517 "flush": true, 00:06:32.517 "reset": true, 00:06:32.517 "nvme_admin": false, 00:06:32.517 "nvme_io": false, 00:06:32.517 "nvme_io_md": false, 00:06:32.517 "write_zeroes": true, 00:06:32.517 "zcopy": true, 00:06:32.517 "get_zone_info": false, 00:06:32.517 "zone_management": false, 00:06:32.517 "zone_append": false, 00:06:32.517 "compare": false, 00:06:32.517 "compare_and_write": false, 00:06:32.517 
"abort": true, 00:06:32.517 "seek_hole": false, 00:06:32.517 "seek_data": false, 00:06:32.517 "copy": true, 00:06:32.517 "nvme_iov_md": false 00:06:32.517 }, 00:06:32.517 "memory_domains": [ 00:06:32.518 { 00:06:32.518 "dma_device_id": "system", 00:06:32.518 "dma_device_type": 1 00:06:32.518 }, 00:06:32.518 { 00:06:32.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.518 "dma_device_type": 2 00:06:32.518 } 00:06:32.518 ], 00:06:32.518 "driver_specific": {} 00:06:32.518 } 00:06:32.518 ] 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:32.518 "name": "Existed_Raid", 00:06:32.518 "uuid": "c24b9596-80d7-4099-8ba5-10191b50bd42", 00:06:32.518 "strip_size_kb": 64, 00:06:32.518 "state": "configuring", 00:06:32.518 "raid_level": "raid0", 00:06:32.518 "superblock": true, 00:06:32.518 "num_base_bdevs": 2, 00:06:32.518 "num_base_bdevs_discovered": 1, 00:06:32.518 "num_base_bdevs_operational": 2, 00:06:32.518 "base_bdevs_list": [ 00:06:32.518 { 00:06:32.518 "name": "BaseBdev1", 00:06:32.518 "uuid": "4b4a96f1-f5ca-42eb-953a-0c14d9ff736b", 00:06:32.518 "is_configured": true, 00:06:32.518 "data_offset": 2048, 00:06:32.518 "data_size": 63488 00:06:32.518 }, 00:06:32.518 { 00:06:32.518 "name": "BaseBdev2", 00:06:32.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.518 "is_configured": false, 00:06:32.518 "data_offset": 0, 00:06:32.518 "data_size": 0 00:06:32.518 } 00:06:32.518 ] 00:06:32.518 }' 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:32.518 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.086 [2024-11-26 13:19:21.413889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:33.086 [2024-11-26 13:19:21.413930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.086 [2024-11-26 13:19:21.421963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:33.086 [2024-11-26 13:19:21.424222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:33.086 [2024-11-26 13:19:21.424472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.086 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.086 "name": "Existed_Raid", 00:06:33.086 "uuid": "a88eb210-52c1-4583-b71a-03121ab2b8dc", 00:06:33.086 "strip_size_kb": 64, 00:06:33.086 "state": "configuring", 00:06:33.086 "raid_level": "raid0", 00:06:33.086 "superblock": true, 00:06:33.086 "num_base_bdevs": 2, 00:06:33.086 "num_base_bdevs_discovered": 1, 00:06:33.086 "num_base_bdevs_operational": 2, 00:06:33.086 "base_bdevs_list": [ 00:06:33.086 { 00:06:33.086 "name": "BaseBdev1", 00:06:33.086 "uuid": "4b4a96f1-f5ca-42eb-953a-0c14d9ff736b", 00:06:33.086 "is_configured": true, 00:06:33.087 "data_offset": 2048, 
00:06:33.087 "data_size": 63488 00:06:33.087 }, 00:06:33.087 { 00:06:33.087 "name": "BaseBdev2", 00:06:33.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.087 "is_configured": false, 00:06:33.087 "data_offset": 0, 00:06:33.087 "data_size": 0 00:06:33.087 } 00:06:33.087 ] 00:06:33.087 }' 00:06:33.087 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.087 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 [2024-11-26 13:19:21.968496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:33.656 [2024-11-26 13:19:21.968787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:33.656 [2024-11-26 13:19:21.968805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:33.656 BaseBdev2 00:06:33.656 [2024-11-26 13:19:21.969092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:33.656 [2024-11-26 13:19:21.969309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:33.656 [2024-11-26 13:19:21.969345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:33.656 [2024-11-26 13:19:21.969506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.656 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 [ 00:06:33.656 { 00:06:33.656 "name": "BaseBdev2", 00:06:33.656 "aliases": [ 00:06:33.656 "121c434e-e792-434b-b05d-b80994a52f57" 00:06:33.656 ], 00:06:33.656 "product_name": "Malloc disk", 00:06:33.656 "block_size": 512, 00:06:33.656 "num_blocks": 65536, 00:06:33.656 "uuid": "121c434e-e792-434b-b05d-b80994a52f57", 00:06:33.656 "assigned_rate_limits": { 00:06:33.656 "rw_ios_per_sec": 0, 00:06:33.656 "rw_mbytes_per_sec": 0, 00:06:33.656 "r_mbytes_per_sec": 0, 00:06:33.656 "w_mbytes_per_sec": 0 00:06:33.656 }, 00:06:33.656 "claimed": true, 00:06:33.656 "claim_type": 
"exclusive_write", 00:06:33.656 "zoned": false, 00:06:33.656 "supported_io_types": { 00:06:33.656 "read": true, 00:06:33.656 "write": true, 00:06:33.656 "unmap": true, 00:06:33.656 "flush": true, 00:06:33.656 "reset": true, 00:06:33.656 "nvme_admin": false, 00:06:33.656 "nvme_io": false, 00:06:33.656 "nvme_io_md": false, 00:06:33.656 "write_zeroes": true, 00:06:33.656 "zcopy": true, 00:06:33.656 "get_zone_info": false, 00:06:33.656 "zone_management": false, 00:06:33.656 "zone_append": false, 00:06:33.656 "compare": false, 00:06:33.656 "compare_and_write": false, 00:06:33.656 "abort": true, 00:06:33.656 "seek_hole": false, 00:06:33.656 "seek_data": false, 00:06:33.656 "copy": true, 00:06:33.656 "nvme_iov_md": false 00:06:33.656 }, 00:06:33.656 "memory_domains": [ 00:06:33.656 { 00:06:33.656 "dma_device_id": "system", 00:06:33.656 "dma_device_type": 1 00:06:33.656 }, 00:06:33.656 { 00:06:33.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.656 "dma_device_type": 2 00:06:33.656 } 00:06:33.656 ], 00:06:33.656 "driver_specific": {} 00:06:33.656 } 00:06:33.656 ] 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.656 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.656 "name": "Existed_Raid", 00:06:33.656 "uuid": "a88eb210-52c1-4583-b71a-03121ab2b8dc", 00:06:33.656 "strip_size_kb": 64, 00:06:33.656 "state": "online", 00:06:33.656 "raid_level": "raid0", 00:06:33.656 "superblock": true, 00:06:33.656 "num_base_bdevs": 2, 00:06:33.656 "num_base_bdevs_discovered": 2, 00:06:33.656 "num_base_bdevs_operational": 2, 00:06:33.657 "base_bdevs_list": [ 00:06:33.657 { 00:06:33.657 "name": "BaseBdev1", 00:06:33.657 "uuid": "4b4a96f1-f5ca-42eb-953a-0c14d9ff736b", 00:06:33.657 "is_configured": true, 00:06:33.657 "data_offset": 2048, 00:06:33.657 "data_size": 63488 
00:06:33.657 }, 00:06:33.657 { 00:06:33.657 "name": "BaseBdev2", 00:06:33.657 "uuid": "121c434e-e792-434b-b05d-b80994a52f57", 00:06:33.657 "is_configured": true, 00:06:33.657 "data_offset": 2048, 00:06:33.657 "data_size": 63488 00:06:33.657 } 00:06:33.657 ] 00:06:33.657 }' 00:06:33.657 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.657 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.225 [2024-11-26 13:19:22.508992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:34.225 "name": 
"Existed_Raid", 00:06:34.225 "aliases": [ 00:06:34.225 "a88eb210-52c1-4583-b71a-03121ab2b8dc" 00:06:34.225 ], 00:06:34.225 "product_name": "Raid Volume", 00:06:34.225 "block_size": 512, 00:06:34.225 "num_blocks": 126976, 00:06:34.225 "uuid": "a88eb210-52c1-4583-b71a-03121ab2b8dc", 00:06:34.225 "assigned_rate_limits": { 00:06:34.225 "rw_ios_per_sec": 0, 00:06:34.225 "rw_mbytes_per_sec": 0, 00:06:34.225 "r_mbytes_per_sec": 0, 00:06:34.225 "w_mbytes_per_sec": 0 00:06:34.225 }, 00:06:34.225 "claimed": false, 00:06:34.225 "zoned": false, 00:06:34.225 "supported_io_types": { 00:06:34.225 "read": true, 00:06:34.225 "write": true, 00:06:34.225 "unmap": true, 00:06:34.225 "flush": true, 00:06:34.225 "reset": true, 00:06:34.225 "nvme_admin": false, 00:06:34.225 "nvme_io": false, 00:06:34.225 "nvme_io_md": false, 00:06:34.225 "write_zeroes": true, 00:06:34.225 "zcopy": false, 00:06:34.225 "get_zone_info": false, 00:06:34.225 "zone_management": false, 00:06:34.225 "zone_append": false, 00:06:34.225 "compare": false, 00:06:34.225 "compare_and_write": false, 00:06:34.225 "abort": false, 00:06:34.225 "seek_hole": false, 00:06:34.225 "seek_data": false, 00:06:34.225 "copy": false, 00:06:34.225 "nvme_iov_md": false 00:06:34.225 }, 00:06:34.225 "memory_domains": [ 00:06:34.225 { 00:06:34.225 "dma_device_id": "system", 00:06:34.225 "dma_device_type": 1 00:06:34.225 }, 00:06:34.225 { 00:06:34.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.225 "dma_device_type": 2 00:06:34.225 }, 00:06:34.225 { 00:06:34.225 "dma_device_id": "system", 00:06:34.225 "dma_device_type": 1 00:06:34.225 }, 00:06:34.225 { 00:06:34.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.225 "dma_device_type": 2 00:06:34.225 } 00:06:34.225 ], 00:06:34.225 "driver_specific": { 00:06:34.225 "raid": { 00:06:34.225 "uuid": "a88eb210-52c1-4583-b71a-03121ab2b8dc", 00:06:34.225 "strip_size_kb": 64, 00:06:34.225 "state": "online", 00:06:34.225 "raid_level": "raid0", 00:06:34.225 "superblock": true, 00:06:34.225 
"num_base_bdevs": 2, 00:06:34.225 "num_base_bdevs_discovered": 2, 00:06:34.225 "num_base_bdevs_operational": 2, 00:06:34.225 "base_bdevs_list": [ 00:06:34.225 { 00:06:34.225 "name": "BaseBdev1", 00:06:34.225 "uuid": "4b4a96f1-f5ca-42eb-953a-0c14d9ff736b", 00:06:34.225 "is_configured": true, 00:06:34.225 "data_offset": 2048, 00:06:34.225 "data_size": 63488 00:06:34.225 }, 00:06:34.225 { 00:06:34.225 "name": "BaseBdev2", 00:06:34.225 "uuid": "121c434e-e792-434b-b05d-b80994a52f57", 00:06:34.225 "is_configured": true, 00:06:34.225 "data_offset": 2048, 00:06:34.225 "data_size": 63488 00:06:34.225 } 00:06:34.225 ] 00:06:34.225 } 00:06:34.225 } 00:06:34.225 }' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:34.225 BaseBdev2' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.225 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.225 [2024-11-26 13:19:22.768829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:34.225 [2024-11-26 13:19:22.768862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:34.225 [2024-11-26 13:19:22.768915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.485 13:19:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.485 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.485 "name": "Existed_Raid", 00:06:34.485 "uuid": "a88eb210-52c1-4583-b71a-03121ab2b8dc", 00:06:34.485 "strip_size_kb": 64, 00:06:34.485 "state": "offline", 00:06:34.485 "raid_level": "raid0", 00:06:34.485 "superblock": true, 00:06:34.485 "num_base_bdevs": 2, 00:06:34.485 "num_base_bdevs_discovered": 1, 00:06:34.485 "num_base_bdevs_operational": 1, 00:06:34.485 "base_bdevs_list": [ 00:06:34.485 { 00:06:34.485 "name": null, 00:06:34.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.486 "is_configured": false, 00:06:34.486 "data_offset": 0, 00:06:34.486 "data_size": 63488 00:06:34.486 }, 00:06:34.486 { 00:06:34.486 "name": "BaseBdev2", 00:06:34.486 "uuid": "121c434e-e792-434b-b05d-b80994a52f57", 00:06:34.486 "is_configured": true, 00:06:34.486 "data_offset": 2048, 00:06:34.486 "data_size": 63488 00:06:34.486 } 00:06:34.486 ] 00:06:34.486 }' 00:06:34.486 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.486 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:35.054 13:19:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.054 [2024-11-26 13:19:23.398031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:35.054 [2024-11-26 13:19:23.398088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.054 13:19:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60467 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60467 ']' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60467 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60467 00:06:35.054 killing process with pid 60467 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60467' 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60467 00:06:35.054 [2024-11-26 13:19:23.558806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:35.054 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60467 00:06:35.054 [2024-11-26 13:19:23.570913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:35.991 13:19:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:06:35.991 00:06:35.991 real 0m5.211s 00:06:35.991 user 0m8.010s 00:06:35.991 sys 0m0.775s 00:06:35.991 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.991 ************************************ 00:06:35.991 END TEST raid_state_function_test_sb 00:06:35.991 ************************************ 00:06:35.991 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.991 13:19:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:35.992 13:19:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:35.992 13:19:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.992 13:19:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:35.992 ************************************ 00:06:35.992 START TEST raid_superblock_test 00:06:35.992 ************************************ 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:35.992 13:19:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60719 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60719 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60719 ']' 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.992 13:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.992 [2024-11-26 13:19:24.546377] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:35.992 [2024-11-26 13:19:24.546739] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60719 ] 00:06:36.251 [2024-11-26 13:19:24.704550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.251 [2024-11-26 13:19:24.811749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.510 [2024-11-26 13:19:24.980721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.510 [2024-11-26 13:19:24.981025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:37.078 13:19:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.078 malloc1 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.078 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.078 [2024-11-26 13:19:25.555048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:37.078 [2024-11-26 13:19:25.555323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.078 [2024-11-26 13:19:25.555409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:37.079 [2024-11-26 13:19:25.555668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.079 [2024-11-26 13:19:25.558273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.079 [2024-11-26 13:19:25.558316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:37.079 pt1 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:37.079 13:19:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.079 malloc2 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.079 [2024-11-26 13:19:25.600911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:37.079 [2024-11-26 13:19:25.600968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.079 [2024-11-26 13:19:25.600996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:37.079 
[2024-11-26 13:19:25.601009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.079 [2024-11-26 13:19:25.603444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.079 [2024-11-26 13:19:25.603643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:37.079 pt2 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.079 [2024-11-26 13:19:25.612988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:37.079 [2024-11-26 13:19:25.615081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:37.079 [2024-11-26 13:19:25.615279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:37.079 [2024-11-26 13:19:25.615296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:37.079 [2024-11-26 13:19:25.615544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:37.079 [2024-11-26 13:19:25.615717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:37.079 [2024-11-26 13:19:25.615737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:37.079 [2024-11-26 13:19:25.615888] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:37.079 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.337 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.337 "name": "raid_bdev1", 00:06:37.337 "uuid": 
"23111179-df3d-4993-9494-e593607c9c10", 00:06:37.337 "strip_size_kb": 64, 00:06:37.337 "state": "online", 00:06:37.337 "raid_level": "raid0", 00:06:37.337 "superblock": true, 00:06:37.337 "num_base_bdevs": 2, 00:06:37.337 "num_base_bdevs_discovered": 2, 00:06:37.337 "num_base_bdevs_operational": 2, 00:06:37.337 "base_bdevs_list": [ 00:06:37.337 { 00:06:37.337 "name": "pt1", 00:06:37.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:37.337 "is_configured": true, 00:06:37.337 "data_offset": 2048, 00:06:37.337 "data_size": 63488 00:06:37.337 }, 00:06:37.337 { 00:06:37.337 "name": "pt2", 00:06:37.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:37.337 "is_configured": true, 00:06:37.337 "data_offset": 2048, 00:06:37.337 "data_size": 63488 00:06:37.337 } 00:06:37.337 ] 00:06:37.338 }' 00:06:37.338 13:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.338 13:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.597 13:19:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.597 [2024-11-26 13:19:26.125343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.597 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.857 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:37.857 "name": "raid_bdev1", 00:06:37.857 "aliases": [ 00:06:37.857 "23111179-df3d-4993-9494-e593607c9c10" 00:06:37.857 ], 00:06:37.857 "product_name": "Raid Volume", 00:06:37.857 "block_size": 512, 00:06:37.857 "num_blocks": 126976, 00:06:37.857 "uuid": "23111179-df3d-4993-9494-e593607c9c10", 00:06:37.857 "assigned_rate_limits": { 00:06:37.857 "rw_ios_per_sec": 0, 00:06:37.857 "rw_mbytes_per_sec": 0, 00:06:37.857 "r_mbytes_per_sec": 0, 00:06:37.857 "w_mbytes_per_sec": 0 00:06:37.857 }, 00:06:37.857 "claimed": false, 00:06:37.857 "zoned": false, 00:06:37.857 "supported_io_types": { 00:06:37.857 "read": true, 00:06:37.857 "write": true, 00:06:37.857 "unmap": true, 00:06:37.857 "flush": true, 00:06:37.857 "reset": true, 00:06:37.857 "nvme_admin": false, 00:06:37.857 "nvme_io": false, 00:06:37.857 "nvme_io_md": false, 00:06:37.857 "write_zeroes": true, 00:06:37.857 "zcopy": false, 00:06:37.857 "get_zone_info": false, 00:06:37.857 "zone_management": false, 00:06:37.857 "zone_append": false, 00:06:37.857 "compare": false, 00:06:37.857 "compare_and_write": false, 00:06:37.857 "abort": false, 00:06:37.857 "seek_hole": false, 00:06:37.857 "seek_data": false, 00:06:37.857 "copy": false, 00:06:37.857 "nvme_iov_md": false 00:06:37.857 }, 00:06:37.857 "memory_domains": [ 00:06:37.857 { 00:06:37.857 "dma_device_id": "system", 00:06:37.857 "dma_device_type": 1 00:06:37.857 }, 00:06:37.857 { 00:06:37.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.857 "dma_device_type": 2 00:06:37.857 }, 00:06:37.857 { 00:06:37.857 "dma_device_id": "system", 00:06:37.857 "dma_device_type": 
1 00:06:37.857 }, 00:06:37.857 { 00:06:37.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.857 "dma_device_type": 2 00:06:37.857 } 00:06:37.857 ], 00:06:37.857 "driver_specific": { 00:06:37.857 "raid": { 00:06:37.857 "uuid": "23111179-df3d-4993-9494-e593607c9c10", 00:06:37.858 "strip_size_kb": 64, 00:06:37.858 "state": "online", 00:06:37.858 "raid_level": "raid0", 00:06:37.858 "superblock": true, 00:06:37.858 "num_base_bdevs": 2, 00:06:37.858 "num_base_bdevs_discovered": 2, 00:06:37.858 "num_base_bdevs_operational": 2, 00:06:37.858 "base_bdevs_list": [ 00:06:37.858 { 00:06:37.858 "name": "pt1", 00:06:37.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:37.858 "is_configured": true, 00:06:37.858 "data_offset": 2048, 00:06:37.858 "data_size": 63488 00:06:37.858 }, 00:06:37.858 { 00:06:37.858 "name": "pt2", 00:06:37.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:37.858 "is_configured": true, 00:06:37.858 "data_offset": 2048, 00:06:37.858 "data_size": 63488 00:06:37.858 } 00:06:37.858 ] 00:06:37.858 } 00:06:37.858 } 00:06:37.858 }' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:37.858 pt2' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.858 13:19:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 [2024-11-26 13:19:26.369438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=23111179-df3d-4993-9494-e593607c9c10 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 23111179-df3d-4993-9494-e593607c9c10 ']' 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.858 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 [2024-11-26 13:19:26.421130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:38.118 [2024-11-26 13:19:26.421311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.118 [2024-11-26 13:19:26.421407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.118 [2024-11-26 13:19:26.421468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.118 [2024-11-26 13:19:26.421491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.118 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.118 [2024-11-26 13:19:26.557172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:38.118 [2024-11-26 13:19:26.560511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:38.118 [2024-11-26 13:19:26.560655] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:38.118 [2024-11-26 13:19:26.560749] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:38.118 [2024-11-26 13:19:26.560783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:38.118 [2024-11-26 13:19:26.560808] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:38.118 request: 00:06:38.118 { 00:06:38.118 "name": "raid_bdev1", 00:06:38.119 "raid_level": "raid0", 00:06:38.119 "base_bdevs": [ 00:06:38.119 "malloc1", 00:06:38.119 "malloc2" 00:06:38.119 ], 00:06:38.119 "strip_size_kb": 64, 00:06:38.119 "superblock": false, 00:06:38.119 "method": "bdev_raid_create", 00:06:38.119 "req_id": 1 00:06:38.119 } 00:06:38.119 Got JSON-RPC error response 00:06:38.119 response: 00:06:38.119 { 00:06:38.119 "code": -17, 00:06:38.119 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:38.119 } 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.119 [2024-11-26 13:19:26.625532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:38.119 [2024-11-26 13:19:26.625785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.119 [2024-11-26 13:19:26.625835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:38.119 [2024-11-26 13:19:26.625859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.119 [2024-11-26 13:19:26.629484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.119 [2024-11-26 13:19:26.629546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:38.119 [2024-11-26 13:19:26.629661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:38.119 [2024-11-26 13:19:26.629780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:38.119 pt1 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.119 "name": "raid_bdev1", 00:06:38.119 "uuid": "23111179-df3d-4993-9494-e593607c9c10", 00:06:38.119 "strip_size_kb": 64, 00:06:38.119 "state": "configuring", 00:06:38.119 "raid_level": "raid0", 00:06:38.119 "superblock": true, 00:06:38.119 "num_base_bdevs": 2, 00:06:38.119 "num_base_bdevs_discovered": 1, 00:06:38.119 "num_base_bdevs_operational": 2, 00:06:38.119 "base_bdevs_list": [ 00:06:38.119 { 00:06:38.119 "name": "pt1", 00:06:38.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:38.119 "is_configured": true, 00:06:38.119 "data_offset": 2048, 00:06:38.119 "data_size": 63488 00:06:38.119 }, 00:06:38.119 { 00:06:38.119 "name": null, 00:06:38.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:38.119 "is_configured": false, 00:06:38.119 "data_offset": 2048, 00:06:38.119 "data_size": 63488 00:06:38.119 } 00:06:38.119 ] 00:06:38.119 }' 00:06:38.119 13:19:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.119 13:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.688 [2024-11-26 13:19:27.129857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:38.688 [2024-11-26 13:19:27.130074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.688 [2024-11-26 13:19:27.130107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:38.688 [2024-11-26 13:19:27.130124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.688 [2024-11-26 13:19:27.130618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.688 [2024-11-26 13:19:27.130653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:38.688 [2024-11-26 13:19:27.130719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:38.688 [2024-11-26 13:19:27.130748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:38.688 [2024-11-26 13:19:27.130860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:38.688 [2024-11-26 13:19:27.130885] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:38.688 [2024-11-26 13:19:27.131135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:38.688 [2024-11-26 13:19:27.131341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:38.688 [2024-11-26 13:19:27.131356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:38.688 [2024-11-26 13:19:27.131511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.688 pt2 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.688 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.688 "name": "raid_bdev1", 00:06:38.688 "uuid": "23111179-df3d-4993-9494-e593607c9c10", 00:06:38.688 "strip_size_kb": 64, 00:06:38.688 "state": "online", 00:06:38.689 "raid_level": "raid0", 00:06:38.689 "superblock": true, 00:06:38.689 "num_base_bdevs": 2, 00:06:38.689 "num_base_bdevs_discovered": 2, 00:06:38.689 "num_base_bdevs_operational": 2, 00:06:38.689 "base_bdevs_list": [ 00:06:38.689 { 00:06:38.689 "name": "pt1", 00:06:38.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:38.689 "is_configured": true, 00:06:38.689 "data_offset": 2048, 00:06:38.689 "data_size": 63488 00:06:38.689 }, 00:06:38.689 { 00:06:38.689 "name": "pt2", 00:06:38.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:38.689 "is_configured": true, 00:06:38.689 "data_offset": 2048, 00:06:38.689 "data_size": 63488 00:06:38.689 } 00:06:38.689 ] 00:06:38.689 }' 00:06:38.689 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.689 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:39.256 
13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.256 [2024-11-26 13:19:27.622211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:39.256 "name": "raid_bdev1", 00:06:39.256 "aliases": [ 00:06:39.256 "23111179-df3d-4993-9494-e593607c9c10" 00:06:39.256 ], 00:06:39.256 "product_name": "Raid Volume", 00:06:39.256 "block_size": 512, 00:06:39.256 "num_blocks": 126976, 00:06:39.256 "uuid": "23111179-df3d-4993-9494-e593607c9c10", 00:06:39.256 "assigned_rate_limits": { 00:06:39.256 "rw_ios_per_sec": 0, 00:06:39.256 "rw_mbytes_per_sec": 0, 00:06:39.256 "r_mbytes_per_sec": 0, 00:06:39.256 "w_mbytes_per_sec": 0 00:06:39.256 }, 00:06:39.256 "claimed": false, 00:06:39.256 "zoned": false, 00:06:39.256 "supported_io_types": { 00:06:39.256 "read": true, 00:06:39.256 "write": true, 00:06:39.256 "unmap": true, 00:06:39.256 "flush": true, 00:06:39.256 "reset": true, 00:06:39.256 "nvme_admin": false, 00:06:39.256 "nvme_io": false, 00:06:39.256 "nvme_io_md": false, 00:06:39.256 
"write_zeroes": true, 00:06:39.256 "zcopy": false, 00:06:39.256 "get_zone_info": false, 00:06:39.256 "zone_management": false, 00:06:39.256 "zone_append": false, 00:06:39.256 "compare": false, 00:06:39.256 "compare_and_write": false, 00:06:39.256 "abort": false, 00:06:39.256 "seek_hole": false, 00:06:39.256 "seek_data": false, 00:06:39.256 "copy": false, 00:06:39.256 "nvme_iov_md": false 00:06:39.256 }, 00:06:39.256 "memory_domains": [ 00:06:39.256 { 00:06:39.256 "dma_device_id": "system", 00:06:39.256 "dma_device_type": 1 00:06:39.256 }, 00:06:39.256 { 00:06:39.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.256 "dma_device_type": 2 00:06:39.256 }, 00:06:39.256 { 00:06:39.256 "dma_device_id": "system", 00:06:39.256 "dma_device_type": 1 00:06:39.256 }, 00:06:39.256 { 00:06:39.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.256 "dma_device_type": 2 00:06:39.256 } 00:06:39.256 ], 00:06:39.256 "driver_specific": { 00:06:39.256 "raid": { 00:06:39.256 "uuid": "23111179-df3d-4993-9494-e593607c9c10", 00:06:39.256 "strip_size_kb": 64, 00:06:39.256 "state": "online", 00:06:39.256 "raid_level": "raid0", 00:06:39.256 "superblock": true, 00:06:39.256 "num_base_bdevs": 2, 00:06:39.256 "num_base_bdevs_discovered": 2, 00:06:39.256 "num_base_bdevs_operational": 2, 00:06:39.256 "base_bdevs_list": [ 00:06:39.256 { 00:06:39.256 "name": "pt1", 00:06:39.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:39.256 "is_configured": true, 00:06:39.256 "data_offset": 2048, 00:06:39.256 "data_size": 63488 00:06:39.256 }, 00:06:39.256 { 00:06:39.256 "name": "pt2", 00:06:39.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:39.256 "is_configured": true, 00:06:39.256 "data_offset": 2048, 00:06:39.256 "data_size": 63488 00:06:39.256 } 00:06:39.256 ] 00:06:39.256 } 00:06:39.256 } 00:06:39.256 }' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:39.256 pt2' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.256 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.516 13:19:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.516 [2024-11-26 13:19:27.870295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 23111179-df3d-4993-9494-e593607c9c10 '!=' 23111179-df3d-4993-9494-e593607c9c10 ']' 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60719 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60719 ']' 00:06:39.516 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60719 00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60719 00:06:39.517 killing process with pid 60719 
00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60719' 00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60719 00:06:39.517 [2024-11-26 13:19:27.952501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:39.517 [2024-11-26 13:19:27.952575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:39.517 13:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60719 00:06:39.517 [2024-11-26 13:19:27.952620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:39.517 [2024-11-26 13:19:27.952638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:39.776 [2024-11-26 13:19:28.100520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.714 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:40.714 00:06:40.714 real 0m4.546s 00:06:40.714 user 0m6.797s 00:06:40.714 sys 0m0.632s 00:06:40.714 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.714 ************************************ 00:06:40.714 END TEST raid_superblock_test 00:06:40.714 ************************************ 00:06:40.714 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.714 13:19:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:40.714 13:19:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:40.714 13:19:29 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.714 13:19:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.714 ************************************ 00:06:40.714 START TEST raid_read_error_test 00:06:40.714 ************************************ 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:40.714 13:19:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ncUcEM2vSr 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60931 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60931 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 60931 ']' 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.714 13:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.714 [2024-11-26 13:19:29.172125] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:40.714 [2024-11-26 13:19:29.172280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60931 ] 00:06:40.976 [2024-11-26 13:19:29.337121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.976 [2024-11-26 13:19:29.448695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.235 [2024-11-26 13:19:29.642692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.235 [2024-11-26 13:19:29.642773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.802 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.802 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:41.802 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:41.802 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 BaseBdev1_malloc 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 true 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 [2024-11-26 13:19:30.124109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:41.803 [2024-11-26 13:19:30.124174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.803 [2024-11-26 13:19:30.124200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:41.803 [2024-11-26 13:19:30.124216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.803 [2024-11-26 13:19:30.126735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.803 [2024-11-26 13:19:30.126928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:41.803 BaseBdev1 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:06:41.803 BaseBdev2_malloc 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 true 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 [2024-11-26 13:19:30.177534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:41.803 [2024-11-26 13:19:30.177783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.803 [2024-11-26 13:19:30.177816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:41.803 [2024-11-26 13:19:30.177833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.803 [2024-11-26 13:19:30.180450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.803 [2024-11-26 13:19:30.180493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:41.803 BaseBdev2 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:41.803 13:19:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 [2024-11-26 13:19:30.185608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:41.803 [2024-11-26 13:19:30.187891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:41.803 [2024-11-26 13:19:30.188286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:41.803 [2024-11-26 13:19:30.188317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:41.803 [2024-11-26 13:19:30.188582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:41.803 [2024-11-26 13:19:30.188791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:41.803 [2024-11-26 13:19:30.188808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:41.803 [2024-11-26 13:19:30.188969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.803 "name": "raid_bdev1", 00:06:41.803 "uuid": "76b5b7f1-5223-414a-b19a-be5d42074a15", 00:06:41.803 "strip_size_kb": 64, 00:06:41.803 "state": "online", 00:06:41.803 "raid_level": "raid0", 00:06:41.803 "superblock": true, 00:06:41.803 "num_base_bdevs": 2, 00:06:41.803 "num_base_bdevs_discovered": 2, 00:06:41.803 "num_base_bdevs_operational": 2, 00:06:41.803 "base_bdevs_list": [ 00:06:41.803 { 00:06:41.803 "name": "BaseBdev1", 00:06:41.803 "uuid": "c6f9a4b5-5b76-5699-a48c-4476e08f04b6", 00:06:41.803 "is_configured": true, 00:06:41.803 "data_offset": 2048, 00:06:41.803 "data_size": 63488 00:06:41.803 }, 00:06:41.803 { 00:06:41.803 "name": "BaseBdev2", 00:06:41.803 "uuid": "2b96a511-30bb-5e50-9a49-fde17e321d18", 00:06:41.803 "is_configured": true, 00:06:41.803 "data_offset": 2048, 00:06:41.803 "data_size": 63488 00:06:41.803 } 00:06:41.803 ] 00:06:41.803 }' 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.803 13:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.372 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:42.372 13:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:42.372 [2024-11-26 13:19:30.818918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:43.312 "name": "raid_bdev1", 00:06:43.312 "uuid": "76b5b7f1-5223-414a-b19a-be5d42074a15", 00:06:43.312 "strip_size_kb": 64, 00:06:43.312 "state": "online", 00:06:43.312 "raid_level": "raid0", 00:06:43.312 "superblock": true, 00:06:43.312 "num_base_bdevs": 2, 00:06:43.312 "num_base_bdevs_discovered": 2, 00:06:43.312 "num_base_bdevs_operational": 2, 00:06:43.312 "base_bdevs_list": [ 00:06:43.312 { 00:06:43.312 "name": "BaseBdev1", 00:06:43.312 "uuid": "c6f9a4b5-5b76-5699-a48c-4476e08f04b6", 00:06:43.312 "is_configured": true, 00:06:43.312 "data_offset": 2048, 00:06:43.312 "data_size": 63488 00:06:43.312 }, 00:06:43.312 { 00:06:43.312 "name": "BaseBdev2", 00:06:43.312 "uuid": "2b96a511-30bb-5e50-9a49-fde17e321d18", 00:06:43.312 "is_configured": true, 00:06:43.312 "data_offset": 2048, 00:06:43.312 "data_size": 63488 00:06:43.312 } 00:06:43.312 ] 00:06:43.312 }' 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:43.312 13:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.882 [2024-11-26 13:19:32.222653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:43.882 [2024-11-26 13:19:32.222695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.882 [2024-11-26 13:19:32.225608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.882 [2024-11-26 13:19:32.225836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.882 [2024-11-26 13:19:32.225929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.882 [2024-11-26 13:19:32.226195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:43.882 { 00:06:43.882 "results": [ 00:06:43.882 { 00:06:43.882 "job": "raid_bdev1", 00:06:43.882 "core_mask": "0x1", 00:06:43.882 "workload": "randrw", 00:06:43.882 "percentage": 50, 00:06:43.882 "status": "finished", 00:06:43.882 "queue_depth": 1, 00:06:43.882 "io_size": 131072, 00:06:43.882 "runtime": 1.401571, 00:06:43.882 "iops": 13175.215526006174, 00:06:43.882 "mibps": 1646.9019407507717, 00:06:43.882 "io_failed": 1, 00:06:43.882 "io_timeout": 0, 00:06:43.882 "avg_latency_us": 106.48715694334366, 00:06:43.882 "min_latency_us": 34.67636363636364, 00:06:43.882 "max_latency_us": 1668.189090909091 00:06:43.882 } 00:06:43.882 ], 00:06:43.882 "core_count": 1 00:06:43.882 } 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60931 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 60931 ']' 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 60931 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60931 00:06:43.882 killing process with pid 60931 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.882 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60931' 00:06:43.883 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 60931 00:06:43.883 [2024-11-26 13:19:32.261010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.883 13:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 60931 00:06:43.883 [2024-11-26 13:19:32.356704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ncUcEM2vSr 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:44.821 ************************************ 00:06:44.821 END TEST raid_read_error_test 00:06:44.821 ************************************ 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:06:44.821 00:06:44.821 real 0m4.248s 00:06:44.821 user 0m5.282s 00:06:44.821 sys 0m0.570s 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.821 13:19:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.821 13:19:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:44.821 13:19:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:44.821 13:19:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.821 13:19:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:44.821 ************************************ 00:06:44.821 START TEST raid_write_error_test 00:06:44.821 ************************************ 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:44.821 13:19:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:44.821 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7eyCbrEUaL 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61071 00:06:45.081 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61071 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61071 ']' 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:45.081 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.081 [2024-11-26 13:19:33.495310] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:45.081 [2024-11-26 13:19:33.495495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:06:45.341 [2024-11-26 13:19:33.676793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.341 [2024-11-26 13:19:33.797740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.600 [2024-11-26 13:19:33.988440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.600 [2024-11-26 13:19:33.988517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.859 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.859 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:45.859 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:45.859 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:45.859 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.859 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.119 BaseBdev1_malloc 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 true 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 [2024-11-26 13:19:34.447699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:46.120 [2024-11-26 13:19:34.447768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.120 [2024-11-26 13:19:34.447795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:46.120 [2024-11-26 13:19:34.447810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.120 [2024-11-26 13:19:34.450415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.120 [2024-11-26 13:19:34.450458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:46.120 BaseBdev1 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 BaseBdev2_malloc 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:46.120 13:19:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 true 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 [2024-11-26 13:19:34.501123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:46.120 [2024-11-26 13:19:34.501178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.120 [2024-11-26 13:19:34.501203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:46.120 [2024-11-26 13:19:34.501217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.120 [2024-11-26 13:19:34.503824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.120 [2024-11-26 13:19:34.503866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:46.120 BaseBdev2 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 [2024-11-26 13:19:34.509199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:46.120 [2024-11-26 13:19:34.511527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:46.120 [2024-11-26 13:19:34.511756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:46.120 [2024-11-26 13:19:34.511780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:46.120 [2024-11-26 13:19:34.512027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:46.120 [2024-11-26 13:19:34.512256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:46.120 [2024-11-26 13:19:34.512276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:46.120 [2024-11-26 13:19:34.512437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.120 "name": "raid_bdev1", 00:06:46.120 "uuid": "c7f3235a-dc13-45ec-aba1-0196822ba94c", 00:06:46.120 "strip_size_kb": 64, 00:06:46.120 "state": "online", 00:06:46.120 "raid_level": "raid0", 00:06:46.120 "superblock": true, 00:06:46.120 "num_base_bdevs": 2, 00:06:46.120 "num_base_bdevs_discovered": 2, 00:06:46.120 "num_base_bdevs_operational": 2, 00:06:46.120 "base_bdevs_list": [ 00:06:46.120 { 00:06:46.120 "name": "BaseBdev1", 00:06:46.120 "uuid": "1860fadd-8233-5612-86bd-ded7e5f42f4d", 00:06:46.120 "is_configured": true, 00:06:46.120 "data_offset": 2048, 00:06:46.120 "data_size": 63488 00:06:46.120 }, 00:06:46.120 { 00:06:46.120 "name": "BaseBdev2", 00:06:46.120 "uuid": "fefd420c-5291-54de-a040-dc990fe86c82", 00:06:46.120 "is_configured": true, 00:06:46.120 "data_offset": 2048, 00:06:46.120 "data_size": 63488 00:06:46.120 } 00:06:46.120 ] 00:06:46.120 }' 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.120 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.689 13:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:46.689 13:19:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:46.689 [2024-11-26 13:19:35.138463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.629 13:19:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.629 "name": "raid_bdev1", 00:06:47.629 "uuid": "c7f3235a-dc13-45ec-aba1-0196822ba94c", 00:06:47.629 "strip_size_kb": 64, 00:06:47.629 "state": "online", 00:06:47.629 "raid_level": "raid0", 00:06:47.629 "superblock": true, 00:06:47.629 "num_base_bdevs": 2, 00:06:47.629 "num_base_bdevs_discovered": 2, 00:06:47.629 "num_base_bdevs_operational": 2, 00:06:47.629 "base_bdevs_list": [ 00:06:47.629 { 00:06:47.629 "name": "BaseBdev1", 00:06:47.629 "uuid": "1860fadd-8233-5612-86bd-ded7e5f42f4d", 00:06:47.629 "is_configured": true, 00:06:47.629 "data_offset": 2048, 00:06:47.629 "data_size": 63488 00:06:47.629 }, 00:06:47.629 { 00:06:47.629 "name": "BaseBdev2", 00:06:47.629 "uuid": "fefd420c-5291-54de-a040-dc990fe86c82", 00:06:47.629 "is_configured": true, 00:06:47.629 "data_offset": 2048, 00:06:47.629 "data_size": 63488 00:06:47.629 } 00:06:47.629 ] 00:06:47.629 }' 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.629 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.198 [2024-11-26 13:19:36.546682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:48.198 [2024-11-26 13:19:36.546726] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:48.198 [2024-11-26 13:19:36.549778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.198 [2024-11-26 13:19:36.550450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.198 [2024-11-26 13:19:36.550536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.198 [2024-11-26 13:19:36.550574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:48.198 { 00:06:48.198 "results": [ 00:06:48.198 { 00:06:48.198 "job": "raid_bdev1", 00:06:48.198 "core_mask": "0x1", 00:06:48.198 "workload": "randrw", 00:06:48.198 "percentage": 50, 00:06:48.198 "status": "finished", 00:06:48.198 "queue_depth": 1, 00:06:48.198 "io_size": 131072, 00:06:48.198 "runtime": 1.406198, 00:06:48.198 "iops": 13268.401747122383, 00:06:48.198 "mibps": 1658.550218390298, 00:06:48.198 "io_failed": 1, 00:06:48.198 "io_timeout": 0, 00:06:48.198 "avg_latency_us": 105.48880939736613, 00:06:48.198 "min_latency_us": 34.90909090909091, 00:06:48.198 "max_latency_us": 1541.5854545454545 00:06:48.198 } 00:06:48.198 ], 00:06:48.198 "core_count": 1 00:06:48.198 } 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61071 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61071 ']' 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61071 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61071 00:06:48.198 killing process with pid 61071 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61071' 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61071 00:06:48.198 [2024-11-26 13:19:36.589120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.198 13:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61071 00:06:48.198 [2024-11-26 13:19:36.685201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7eyCbrEUaL 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:06:49.136 00:06:49.136 real 0m4.204s 00:06:49.136 user 0m5.277s 00:06:49.136 sys 0m0.554s 00:06:49.136 ************************************ 00:06:49.136 END TEST raid_write_error_test 00:06:49.136 ************************************ 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.136 13:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.136 13:19:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:49.136 13:19:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:49.136 13:19:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:49.136 13:19:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.136 13:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.136 ************************************ 00:06:49.136 START TEST raid_state_function_test 00:06:49.136 ************************************ 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:49.136 Process raid pid: 61209 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61209 
00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61209' 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61209 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61209 ']' 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.136 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.395 [2024-11-26 13:19:37.749450] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:49.395 [2024-11-26 13:19:37.749639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.395 [2024-11-26 13:19:37.932931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.652 [2024-11-26 13:19:38.044561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.652 [2024-11-26 13:19:38.215244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.652 [2024-11-26 13:19:38.215285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.218 [2024-11-26 13:19:38.674104] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.218 [2024-11-26 13:19:38.674183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.218 [2024-11-26 13:19:38.674199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.218 [2024-11-26 13:19:38.674213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.218 13:19:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.218 "name": "Existed_Raid", 00:06:50.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.218 "strip_size_kb": 64, 00:06:50.218 "state": "configuring", 00:06:50.218 
"raid_level": "concat", 00:06:50.218 "superblock": false, 00:06:50.218 "num_base_bdevs": 2, 00:06:50.218 "num_base_bdevs_discovered": 0, 00:06:50.218 "num_base_bdevs_operational": 2, 00:06:50.218 "base_bdevs_list": [ 00:06:50.218 { 00:06:50.218 "name": "BaseBdev1", 00:06:50.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.218 "is_configured": false, 00:06:50.218 "data_offset": 0, 00:06:50.218 "data_size": 0 00:06:50.218 }, 00:06:50.218 { 00:06:50.218 "name": "BaseBdev2", 00:06:50.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.218 "is_configured": false, 00:06:50.218 "data_offset": 0, 00:06:50.218 "data_size": 0 00:06:50.218 } 00:06:50.218 ] 00:06:50.218 }' 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.218 13:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.825 [2024-11-26 13:19:39.218193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.825 [2024-11-26 13:19:39.218451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:50.825 [2024-11-26 13:19:39.226196] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.825 [2024-11-26 13:19:39.226438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.825 [2024-11-26 13:19:39.226463] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.825 [2024-11-26 13:19:39.226484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.825 [2024-11-26 13:19:39.265591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:50.825 BaseBdev1 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.825 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.825 [ 00:06:50.825 { 00:06:50.825 "name": "BaseBdev1", 00:06:50.825 "aliases": [ 00:06:50.825 "0b55fe46-91d3-4027-a8ba-e26839bf7bbf" 00:06:50.825 ], 00:06:50.825 "product_name": "Malloc disk", 00:06:50.825 "block_size": 512, 00:06:50.825 "num_blocks": 65536, 00:06:50.825 "uuid": "0b55fe46-91d3-4027-a8ba-e26839bf7bbf", 00:06:50.825 "assigned_rate_limits": { 00:06:50.825 "rw_ios_per_sec": 0, 00:06:50.825 "rw_mbytes_per_sec": 0, 00:06:50.825 "r_mbytes_per_sec": 0, 00:06:50.825 "w_mbytes_per_sec": 0 00:06:50.825 }, 00:06:50.825 "claimed": true, 00:06:50.825 "claim_type": "exclusive_write", 00:06:50.825 "zoned": false, 00:06:50.825 "supported_io_types": { 00:06:50.825 "read": true, 00:06:50.825 "write": true, 00:06:50.825 "unmap": true, 00:06:50.825 "flush": true, 00:06:50.825 "reset": true, 00:06:50.825 "nvme_admin": false, 00:06:50.825 "nvme_io": false, 00:06:50.825 "nvme_io_md": false, 00:06:50.825 "write_zeroes": true, 00:06:50.825 "zcopy": true, 00:06:50.825 "get_zone_info": false, 00:06:50.826 "zone_management": false, 00:06:50.826 "zone_append": false, 00:06:50.826 "compare": false, 00:06:50.826 "compare_and_write": false, 00:06:50.826 "abort": true, 00:06:50.826 "seek_hole": false, 00:06:50.826 "seek_data": false, 00:06:50.826 "copy": true, 00:06:50.826 "nvme_iov_md": 
false 00:06:50.826 }, 00:06:50.826 "memory_domains": [ 00:06:50.826 { 00:06:50.826 "dma_device_id": "system", 00:06:50.826 "dma_device_type": 1 00:06:50.826 }, 00:06:50.826 { 00:06:50.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.826 "dma_device_type": 2 00:06:50.826 } 00:06:50.826 ], 00:06:50.826 "driver_specific": {} 00:06:50.826 } 00:06:50.826 ] 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.826 13:19:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.826 "name": "Existed_Raid", 00:06:50.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.826 "strip_size_kb": 64, 00:06:50.826 "state": "configuring", 00:06:50.826 "raid_level": "concat", 00:06:50.826 "superblock": false, 00:06:50.826 "num_base_bdevs": 2, 00:06:50.826 "num_base_bdevs_discovered": 1, 00:06:50.826 "num_base_bdevs_operational": 2, 00:06:50.826 "base_bdevs_list": [ 00:06:50.826 { 00:06:50.826 "name": "BaseBdev1", 00:06:50.826 "uuid": "0b55fe46-91d3-4027-a8ba-e26839bf7bbf", 00:06:50.826 "is_configured": true, 00:06:50.826 "data_offset": 0, 00:06:50.826 "data_size": 65536 00:06:50.826 }, 00:06:50.826 { 00:06:50.826 "name": "BaseBdev2", 00:06:50.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.826 "is_configured": false, 00:06:50.826 "data_offset": 0, 00:06:50.826 "data_size": 0 00:06:50.826 } 00:06:50.826 ] 00:06:50.826 }' 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.826 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.391 [2024-11-26 13:19:39.809763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.391 [2024-11-26 13:19:39.809801] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.391 [2024-11-26 13:19:39.817846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.391 [2024-11-26 13:19:39.819963] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.391 [2024-11-26 13:19:39.820007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.391 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.391 "name": "Existed_Raid", 00:06:51.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.391 "strip_size_kb": 64, 00:06:51.391 "state": "configuring", 00:06:51.391 "raid_level": "concat", 00:06:51.392 "superblock": false, 00:06:51.392 "num_base_bdevs": 2, 00:06:51.392 "num_base_bdevs_discovered": 1, 00:06:51.392 "num_base_bdevs_operational": 2, 00:06:51.392 "base_bdevs_list": [ 00:06:51.392 { 00:06:51.392 "name": "BaseBdev1", 00:06:51.392 "uuid": "0b55fe46-91d3-4027-a8ba-e26839bf7bbf", 00:06:51.392 "is_configured": true, 00:06:51.392 "data_offset": 0, 00:06:51.392 "data_size": 65536 00:06:51.392 }, 00:06:51.392 { 00:06:51.392 "name": "BaseBdev2", 00:06:51.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.392 "is_configured": false, 00:06:51.392 "data_offset": 0, 00:06:51.392 "data_size": 0 
00:06:51.392 } 00:06:51.392 ] 00:06:51.392 }' 00:06:51.392 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.392 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.957 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:51.957 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.957 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.957 [2024-11-26 13:19:40.375616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.957 [2024-11-26 13:19:40.375664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:51.957 [2024-11-26 13:19:40.375676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:51.957 [2024-11-26 13:19:40.375950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:51.957 [2024-11-26 13:19:40.376130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:51.957 [2024-11-26 13:19:40.376151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:51.958 [2024-11-26 13:19:40.376440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.958 BaseBdev2 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:51.958 13:19:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.958 [ 00:06:51.958 { 00:06:51.958 "name": "BaseBdev2", 00:06:51.958 "aliases": [ 00:06:51.958 "1f573351-d571-46c2-8e98-d09f79078a48" 00:06:51.958 ], 00:06:51.958 "product_name": "Malloc disk", 00:06:51.958 "block_size": 512, 00:06:51.958 "num_blocks": 65536, 00:06:51.958 "uuid": "1f573351-d571-46c2-8e98-d09f79078a48", 00:06:51.958 "assigned_rate_limits": { 00:06:51.958 "rw_ios_per_sec": 0, 00:06:51.958 "rw_mbytes_per_sec": 0, 00:06:51.958 "r_mbytes_per_sec": 0, 00:06:51.958 "w_mbytes_per_sec": 0 00:06:51.958 }, 00:06:51.958 "claimed": true, 00:06:51.958 "claim_type": "exclusive_write", 00:06:51.958 "zoned": false, 00:06:51.958 "supported_io_types": { 00:06:51.958 "read": true, 00:06:51.958 "write": true, 00:06:51.958 "unmap": true, 00:06:51.958 "flush": true, 00:06:51.958 "reset": true, 00:06:51.958 "nvme_admin": false, 00:06:51.958 "nvme_io": false, 00:06:51.958 "nvme_io_md": 
false, 00:06:51.958 "write_zeroes": true, 00:06:51.958 "zcopy": true, 00:06:51.958 "get_zone_info": false, 00:06:51.958 "zone_management": false, 00:06:51.958 "zone_append": false, 00:06:51.958 "compare": false, 00:06:51.958 "compare_and_write": false, 00:06:51.958 "abort": true, 00:06:51.958 "seek_hole": false, 00:06:51.958 "seek_data": false, 00:06:51.958 "copy": true, 00:06:51.958 "nvme_iov_md": false 00:06:51.958 }, 00:06:51.958 "memory_domains": [ 00:06:51.958 { 00:06:51.958 "dma_device_id": "system", 00:06:51.958 "dma_device_type": 1 00:06:51.958 }, 00:06:51.958 { 00:06:51.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.958 "dma_device_type": 2 00:06:51.958 } 00:06:51.958 ], 00:06:51.958 "driver_specific": {} 00:06:51.958 } 00:06:51.958 ] 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.958 "name": "Existed_Raid", 00:06:51.958 "uuid": "e28ced6c-b0dd-4fb4-871e-c59e1711b884", 00:06:51.958 "strip_size_kb": 64, 00:06:51.958 "state": "online", 00:06:51.958 "raid_level": "concat", 00:06:51.958 "superblock": false, 00:06:51.958 "num_base_bdevs": 2, 00:06:51.958 "num_base_bdevs_discovered": 2, 00:06:51.958 "num_base_bdevs_operational": 2, 00:06:51.958 "base_bdevs_list": [ 00:06:51.958 { 00:06:51.958 "name": "BaseBdev1", 00:06:51.958 "uuid": "0b55fe46-91d3-4027-a8ba-e26839bf7bbf", 00:06:51.958 "is_configured": true, 00:06:51.958 "data_offset": 0, 00:06:51.958 "data_size": 65536 00:06:51.958 }, 00:06:51.958 { 00:06:51.958 "name": "BaseBdev2", 00:06:51.958 "uuid": "1f573351-d571-46c2-8e98-d09f79078a48", 00:06:51.958 "is_configured": true, 00:06:51.958 "data_offset": 0, 00:06:51.958 "data_size": 65536 00:06:51.958 } 00:06:51.958 ] 00:06:51.958 }' 00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:51.958 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.524 [2024-11-26 13:19:40.936058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:52.524 "name": "Existed_Raid", 00:06:52.524 "aliases": [ 00:06:52.524 "e28ced6c-b0dd-4fb4-871e-c59e1711b884" 00:06:52.524 ], 00:06:52.524 "product_name": "Raid Volume", 00:06:52.524 "block_size": 512, 00:06:52.524 "num_blocks": 131072, 00:06:52.524 "uuid": "e28ced6c-b0dd-4fb4-871e-c59e1711b884", 00:06:52.524 "assigned_rate_limits": { 00:06:52.524 "rw_ios_per_sec": 0, 00:06:52.524 "rw_mbytes_per_sec": 0, 00:06:52.524 "r_mbytes_per_sec": 
0, 00:06:52.524 "w_mbytes_per_sec": 0 00:06:52.524 }, 00:06:52.524 "claimed": false, 00:06:52.524 "zoned": false, 00:06:52.524 "supported_io_types": { 00:06:52.524 "read": true, 00:06:52.524 "write": true, 00:06:52.524 "unmap": true, 00:06:52.524 "flush": true, 00:06:52.524 "reset": true, 00:06:52.524 "nvme_admin": false, 00:06:52.524 "nvme_io": false, 00:06:52.524 "nvme_io_md": false, 00:06:52.524 "write_zeroes": true, 00:06:52.524 "zcopy": false, 00:06:52.524 "get_zone_info": false, 00:06:52.524 "zone_management": false, 00:06:52.524 "zone_append": false, 00:06:52.524 "compare": false, 00:06:52.524 "compare_and_write": false, 00:06:52.524 "abort": false, 00:06:52.524 "seek_hole": false, 00:06:52.524 "seek_data": false, 00:06:52.524 "copy": false, 00:06:52.524 "nvme_iov_md": false 00:06:52.524 }, 00:06:52.524 "memory_domains": [ 00:06:52.524 { 00:06:52.524 "dma_device_id": "system", 00:06:52.524 "dma_device_type": 1 00:06:52.524 }, 00:06:52.524 { 00:06:52.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.524 "dma_device_type": 2 00:06:52.524 }, 00:06:52.524 { 00:06:52.524 "dma_device_id": "system", 00:06:52.524 "dma_device_type": 1 00:06:52.524 }, 00:06:52.524 { 00:06:52.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.524 "dma_device_type": 2 00:06:52.524 } 00:06:52.524 ], 00:06:52.524 "driver_specific": { 00:06:52.524 "raid": { 00:06:52.524 "uuid": "e28ced6c-b0dd-4fb4-871e-c59e1711b884", 00:06:52.524 "strip_size_kb": 64, 00:06:52.524 "state": "online", 00:06:52.524 "raid_level": "concat", 00:06:52.524 "superblock": false, 00:06:52.524 "num_base_bdevs": 2, 00:06:52.524 "num_base_bdevs_discovered": 2, 00:06:52.524 "num_base_bdevs_operational": 2, 00:06:52.524 "base_bdevs_list": [ 00:06:52.524 { 00:06:52.524 "name": "BaseBdev1", 00:06:52.524 "uuid": "0b55fe46-91d3-4027-a8ba-e26839bf7bbf", 00:06:52.524 "is_configured": true, 00:06:52.524 "data_offset": 0, 00:06:52.524 "data_size": 65536 00:06:52.524 }, 00:06:52.524 { 00:06:52.524 "name": "BaseBdev2", 
00:06:52.524 "uuid": "1f573351-d571-46c2-8e98-d09f79078a48", 00:06:52.524 "is_configured": true, 00:06:52.524 "data_offset": 0, 00:06:52.524 "data_size": 65536 00:06:52.524 } 00:06:52.524 ] 00:06:52.524 } 00:06:52.524 } 00:06:52.524 }' 00:06:52.524 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:52.524 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:52.524 BaseBdev2' 00:06:52.524 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.524 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:52.524 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 [2024-11-26 13:19:41.195905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:52.784 [2024-11-26 13:19:41.196083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.784 [2024-11-26 13:19:41.196156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.784 "name": "Existed_Raid", 00:06:52.784 "uuid": "e28ced6c-b0dd-4fb4-871e-c59e1711b884", 00:06:52.784 "strip_size_kb": 64, 00:06:52.784 
"state": "offline", 00:06:52.784 "raid_level": "concat", 00:06:52.784 "superblock": false, 00:06:52.784 "num_base_bdevs": 2, 00:06:52.784 "num_base_bdevs_discovered": 1, 00:06:52.784 "num_base_bdevs_operational": 1, 00:06:52.784 "base_bdevs_list": [ 00:06:52.784 { 00:06:52.784 "name": null, 00:06:52.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.784 "is_configured": false, 00:06:52.784 "data_offset": 0, 00:06:52.784 "data_size": 65536 00:06:52.784 }, 00:06:52.784 { 00:06:52.784 "name": "BaseBdev2", 00:06:52.784 "uuid": "1f573351-d571-46c2-8e98-d09f79078a48", 00:06:52.784 "is_configured": true, 00:06:52.784 "data_offset": 0, 00:06:52.784 "data_size": 65536 00:06:52.784 } 00:06:52.784 ] 00:06:52.784 }' 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.784 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.365 [2024-11-26 13:19:41.830161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:53.365 [2024-11-26 13:19:41.830221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.365 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61209 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61209 ']' 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61209 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61209 00:06:53.662 killing process with pid 61209 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61209' 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61209 00:06:53.662 [2024-11-26 13:19:41.986203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.662 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61209 00:06:53.662 [2024-11-26 13:19:41.998140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:54.613 00:06:54.613 real 0m5.210s 00:06:54.613 user 0m8.009s 00:06:54.613 sys 0m0.749s 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.613 ************************************ 00:06:54.613 END TEST raid_state_function_test 00:06:54.613 ************************************ 00:06:54.613 13:19:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:54.613 13:19:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:54.613 13:19:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.613 13:19:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.613 ************************************ 00:06:54.613 START TEST raid_state_function_test_sb 00:06:54.613 ************************************ 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:54.613 Process raid pid: 61462 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61462 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61462' 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61462 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61462 ']' 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.613 13:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.613 [2024-11-26 13:19:43.010094] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:54.613 [2024-11-26 13:19:43.010305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.873 [2024-11-26 13:19:43.191738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.873 [2024-11-26 13:19:43.303826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.132 [2024-11-26 13:19:43.474800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.132 [2024-11-26 13:19:43.474843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.700 [2024-11-26 13:19:43.964504] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:55.700 [2024-11-26 13:19:43.964579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:55.700 [2024-11-26 13:19:43.964594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.700 [2024-11-26 13:19:43.964625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.700 13:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.700 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.700 "name": "Existed_Raid", 00:06:55.700 "uuid": "f107f945-3fcf-4126-ba12-1c8d33af1a96", 00:06:55.700 "strip_size_kb": 64, 00:06:55.700 "state": "configuring", 00:06:55.700 "raid_level": "concat", 00:06:55.700 "superblock": true, 00:06:55.700 "num_base_bdevs": 2, 00:06:55.700 "num_base_bdevs_discovered": 0, 00:06:55.700 "num_base_bdevs_operational": 2, 00:06:55.700 "base_bdevs_list": [ 00:06:55.700 { 00:06:55.700 "name": "BaseBdev1", 00:06:55.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.700 "is_configured": false, 00:06:55.700 "data_offset": 0, 00:06:55.700 "data_size": 0 00:06:55.700 }, 00:06:55.700 { 00:06:55.700 "name": "BaseBdev2", 00:06:55.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.700 "is_configured": false, 00:06:55.700 "data_offset": 0, 00:06:55.700 "data_size": 0 00:06:55.700 } 00:06:55.700 ] 00:06:55.700 }' 00:06:55.700 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.700 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.959 [2024-11-26 13:19:44.496554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:06:55.959 [2024-11-26 13:19:44.496588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.959 [2024-11-26 13:19:44.504567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:55.959 [2024-11-26 13:19:44.504644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:55.959 [2024-11-26 13:19:44.504657] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.959 [2024-11-26 13:19:44.504673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.959 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.218 [2024-11-26 13:19:44.543678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.218 BaseBdev1 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.218 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.218 [ 00:06:56.218 { 00:06:56.218 "name": "BaseBdev1", 00:06:56.218 "aliases": [ 00:06:56.218 "cbec083a-e00c-4cd1-a55f-49286928c6c1" 00:06:56.218 ], 00:06:56.218 "product_name": "Malloc disk", 00:06:56.218 "block_size": 512, 00:06:56.218 "num_blocks": 65536, 00:06:56.218 "uuid": "cbec083a-e00c-4cd1-a55f-49286928c6c1", 00:06:56.218 "assigned_rate_limits": { 00:06:56.218 "rw_ios_per_sec": 0, 00:06:56.218 "rw_mbytes_per_sec": 0, 00:06:56.218 "r_mbytes_per_sec": 0, 00:06:56.218 "w_mbytes_per_sec": 0 00:06:56.218 }, 00:06:56.218 "claimed": true, 
00:06:56.218 "claim_type": "exclusive_write", 00:06:56.218 "zoned": false, 00:06:56.218 "supported_io_types": { 00:06:56.218 "read": true, 00:06:56.218 "write": true, 00:06:56.218 "unmap": true, 00:06:56.218 "flush": true, 00:06:56.218 "reset": true, 00:06:56.218 "nvme_admin": false, 00:06:56.218 "nvme_io": false, 00:06:56.218 "nvme_io_md": false, 00:06:56.219 "write_zeroes": true, 00:06:56.219 "zcopy": true, 00:06:56.219 "get_zone_info": false, 00:06:56.219 "zone_management": false, 00:06:56.219 "zone_append": false, 00:06:56.219 "compare": false, 00:06:56.219 "compare_and_write": false, 00:06:56.219 "abort": true, 00:06:56.219 "seek_hole": false, 00:06:56.219 "seek_data": false, 00:06:56.219 "copy": true, 00:06:56.219 "nvme_iov_md": false 00:06:56.219 }, 00:06:56.219 "memory_domains": [ 00:06:56.219 { 00:06:56.219 "dma_device_id": "system", 00:06:56.219 "dma_device_type": 1 00:06:56.219 }, 00:06:56.219 { 00:06:56.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.219 "dma_device_type": 2 00:06:56.219 } 00:06:56.219 ], 00:06:56.219 "driver_specific": {} 00:06:56.219 } 00:06:56.219 ] 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.219 13:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.219 "name": "Existed_Raid", 00:06:56.219 "uuid": "c9bb1347-0ce8-4968-bcab-eaa7737fc190", 00:06:56.219 "strip_size_kb": 64, 00:06:56.219 "state": "configuring", 00:06:56.219 "raid_level": "concat", 00:06:56.219 "superblock": true, 00:06:56.219 "num_base_bdevs": 2, 00:06:56.219 "num_base_bdevs_discovered": 1, 00:06:56.219 "num_base_bdevs_operational": 2, 00:06:56.219 "base_bdevs_list": [ 00:06:56.219 { 00:06:56.219 "name": "BaseBdev1", 00:06:56.219 "uuid": "cbec083a-e00c-4cd1-a55f-49286928c6c1", 00:06:56.219 "is_configured": true, 00:06:56.219 "data_offset": 2048, 00:06:56.219 "data_size": 63488 00:06:56.219 }, 00:06:56.219 { 00:06:56.219 "name": "BaseBdev2", 00:06:56.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.219 
"is_configured": false, 00:06:56.219 "data_offset": 0, 00:06:56.219 "data_size": 0 00:06:56.219 } 00:06:56.219 ] 00:06:56.219 }' 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.219 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:56.787 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.787 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 [2024-11-26 13:19:45.083845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:56.787 [2024-11-26 13:19:45.083884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:56.787 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.787 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.787 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 [2024-11-26 13:19:45.095914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.788 [2024-11-26 13:19:45.098505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.788 [2024-11-26 13:19:45.098761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.788 13:19:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.788 13:19:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.788 "name": "Existed_Raid", 00:06:56.788 "uuid": "c649be6f-05d8-4628-bfff-0d51f7866465", 00:06:56.788 "strip_size_kb": 64, 00:06:56.788 "state": "configuring", 00:06:56.788 "raid_level": "concat", 00:06:56.788 "superblock": true, 00:06:56.788 "num_base_bdevs": 2, 00:06:56.788 "num_base_bdevs_discovered": 1, 00:06:56.788 "num_base_bdevs_operational": 2, 00:06:56.788 "base_bdevs_list": [ 00:06:56.788 { 00:06:56.788 "name": "BaseBdev1", 00:06:56.788 "uuid": "cbec083a-e00c-4cd1-a55f-49286928c6c1", 00:06:56.788 "is_configured": true, 00:06:56.788 "data_offset": 2048, 00:06:56.788 "data_size": 63488 00:06:56.788 }, 00:06:56.788 { 00:06:56.788 "name": "BaseBdev2", 00:06:56.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.788 "is_configured": false, 00:06:56.788 "data_offset": 0, 00:06:56.788 "data_size": 0 00:06:56.788 } 00:06:56.788 ] 00:06:56.788 }' 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.788 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.046 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:57.046 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.046 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.305 [2024-11-26 13:19:45.626123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:57.305 [2024-11-26 13:19:45.626613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:57.305 [2024-11-26 13:19:45.626638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:57.305 BaseBdev2 00:06:57.305 [2024-11-26 13:19:45.627009] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:57.305 [2024-11-26 13:19:45.627237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:57.305 [2024-11-26 13:19:45.627264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:57.305 [2024-11-26 13:19:45.627440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.305 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.306 
13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.306 [ 00:06:57.306 { 00:06:57.306 "name": "BaseBdev2", 00:06:57.306 "aliases": [ 00:06:57.306 "298b6df6-b869-4098-8f8c-f74406458ede" 00:06:57.306 ], 00:06:57.306 "product_name": "Malloc disk", 00:06:57.306 "block_size": 512, 00:06:57.306 "num_blocks": 65536, 00:06:57.306 "uuid": "298b6df6-b869-4098-8f8c-f74406458ede", 00:06:57.306 "assigned_rate_limits": { 00:06:57.306 "rw_ios_per_sec": 0, 00:06:57.306 "rw_mbytes_per_sec": 0, 00:06:57.306 "r_mbytes_per_sec": 0, 00:06:57.306 "w_mbytes_per_sec": 0 00:06:57.306 }, 00:06:57.306 "claimed": true, 00:06:57.306 "claim_type": "exclusive_write", 00:06:57.306 "zoned": false, 00:06:57.306 "supported_io_types": { 00:06:57.306 "read": true, 00:06:57.306 "write": true, 00:06:57.306 "unmap": true, 00:06:57.306 "flush": true, 00:06:57.306 "reset": true, 00:06:57.306 "nvme_admin": false, 00:06:57.306 "nvme_io": false, 00:06:57.306 "nvme_io_md": false, 00:06:57.306 "write_zeroes": true, 00:06:57.306 "zcopy": true, 00:06:57.306 "get_zone_info": false, 00:06:57.306 "zone_management": false, 00:06:57.306 "zone_append": false, 00:06:57.306 "compare": false, 00:06:57.306 "compare_and_write": false, 00:06:57.306 "abort": true, 00:06:57.306 "seek_hole": false, 00:06:57.306 "seek_data": false, 00:06:57.306 "copy": true, 00:06:57.306 "nvme_iov_md": false 00:06:57.306 }, 00:06:57.306 "memory_domains": [ 00:06:57.306 { 00:06:57.306 "dma_device_id": "system", 00:06:57.306 "dma_device_type": 1 00:06:57.306 }, 00:06:57.306 { 00:06:57.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.306 "dma_device_type": 2 00:06:57.306 } 00:06:57.306 ], 00:06:57.306 "driver_specific": {} 00:06:57.306 } 00:06:57.306 ] 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:57.306 13:19:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.306 13:19:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.306 "name": "Existed_Raid", 00:06:57.306 "uuid": "c649be6f-05d8-4628-bfff-0d51f7866465", 00:06:57.306 "strip_size_kb": 64, 00:06:57.306 "state": "online", 00:06:57.306 "raid_level": "concat", 00:06:57.306 "superblock": true, 00:06:57.306 "num_base_bdevs": 2, 00:06:57.306 "num_base_bdevs_discovered": 2, 00:06:57.306 "num_base_bdevs_operational": 2, 00:06:57.306 "base_bdevs_list": [ 00:06:57.306 { 00:06:57.306 "name": "BaseBdev1", 00:06:57.306 "uuid": "cbec083a-e00c-4cd1-a55f-49286928c6c1", 00:06:57.306 "is_configured": true, 00:06:57.306 "data_offset": 2048, 00:06:57.306 "data_size": 63488 00:06:57.306 }, 00:06:57.306 { 00:06:57.306 "name": "BaseBdev2", 00:06:57.306 "uuid": "298b6df6-b869-4098-8f8c-f74406458ede", 00:06:57.306 "is_configured": true, 00:06:57.306 "data_offset": 2048, 00:06:57.306 "data_size": 63488 00:06:57.306 } 00:06:57.306 ] 00:06:57.306 }' 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.306 13:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:57.874 [2024-11-26 13:19:46.174664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.874 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:57.874 "name": "Existed_Raid", 00:06:57.874 "aliases": [ 00:06:57.874 "c649be6f-05d8-4628-bfff-0d51f7866465" 00:06:57.874 ], 00:06:57.874 "product_name": "Raid Volume", 00:06:57.874 "block_size": 512, 00:06:57.874 "num_blocks": 126976, 00:06:57.874 "uuid": "c649be6f-05d8-4628-bfff-0d51f7866465", 00:06:57.874 "assigned_rate_limits": { 00:06:57.874 "rw_ios_per_sec": 0, 00:06:57.874 "rw_mbytes_per_sec": 0, 00:06:57.874 "r_mbytes_per_sec": 0, 00:06:57.874 "w_mbytes_per_sec": 0 00:06:57.874 }, 00:06:57.874 "claimed": false, 00:06:57.874 "zoned": false, 00:06:57.874 "supported_io_types": { 00:06:57.874 "read": true, 00:06:57.874 "write": true, 00:06:57.874 "unmap": true, 00:06:57.874 "flush": true, 00:06:57.874 "reset": true, 00:06:57.874 "nvme_admin": false, 00:06:57.874 "nvme_io": false, 00:06:57.874 "nvme_io_md": false, 00:06:57.874 "write_zeroes": true, 00:06:57.874 "zcopy": false, 00:06:57.874 "get_zone_info": false, 00:06:57.874 "zone_management": false, 00:06:57.874 "zone_append": false, 00:06:57.874 "compare": false, 00:06:57.874 "compare_and_write": false, 00:06:57.874 "abort": false, 00:06:57.874 "seek_hole": false, 00:06:57.874 "seek_data": false, 00:06:57.874 "copy": false, 00:06:57.874 "nvme_iov_md": false 00:06:57.874 }, 00:06:57.874 "memory_domains": [ 00:06:57.874 { 00:06:57.874 
"dma_device_id": "system", 00:06:57.874 "dma_device_type": 1 00:06:57.874 }, 00:06:57.874 { 00:06:57.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.874 "dma_device_type": 2 00:06:57.874 }, 00:06:57.874 { 00:06:57.874 "dma_device_id": "system", 00:06:57.874 "dma_device_type": 1 00:06:57.874 }, 00:06:57.874 { 00:06:57.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.874 "dma_device_type": 2 00:06:57.874 } 00:06:57.874 ], 00:06:57.874 "driver_specific": { 00:06:57.874 "raid": { 00:06:57.874 "uuid": "c649be6f-05d8-4628-bfff-0d51f7866465", 00:06:57.874 "strip_size_kb": 64, 00:06:57.874 "state": "online", 00:06:57.874 "raid_level": "concat", 00:06:57.874 "superblock": true, 00:06:57.874 "num_base_bdevs": 2, 00:06:57.874 "num_base_bdevs_discovered": 2, 00:06:57.874 "num_base_bdevs_operational": 2, 00:06:57.874 "base_bdevs_list": [ 00:06:57.874 { 00:06:57.874 "name": "BaseBdev1", 00:06:57.874 "uuid": "cbec083a-e00c-4cd1-a55f-49286928c6c1", 00:06:57.874 "is_configured": true, 00:06:57.874 "data_offset": 2048, 00:06:57.874 "data_size": 63488 00:06:57.874 }, 00:06:57.874 { 00:06:57.874 "name": "BaseBdev2", 00:06:57.874 "uuid": "298b6df6-b869-4098-8f8c-f74406458ede", 00:06:57.874 "is_configured": true, 00:06:57.874 "data_offset": 2048, 00:06:57.874 "data_size": 63488 00:06:57.874 } 00:06:57.875 ] 00:06:57.875 } 00:06:57.875 } 00:06:57.875 }' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:57.875 BaseBdev2' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:57.875 13:19:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1
00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.875 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:06:58.133 [2024-11-26 13:19:46.438523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:06:58.133 [2024-11-26 13:19:46.438561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:58.133 [2024-11-26 13:19:46.438657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:58.133 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:58.134 "name": "Existed_Raid",
00:06:58.134 "uuid": "c649be6f-05d8-4628-bfff-0d51f7866465",
00:06:58.134 "strip_size_kb": 64,
00:06:58.134 "state": "offline",
00:06:58.134 "raid_level": "concat",
00:06:58.134 "superblock": true,
00:06:58.134 "num_base_bdevs": 2,
00:06:58.134 "num_base_bdevs_discovered": 1,
00:06:58.134 "num_base_bdevs_operational": 1,
00:06:58.134 "base_bdevs_list": [
00:06:58.134 {
00:06:58.134 "name": null,
00:06:58.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:58.134 "is_configured": false,
00:06:58.134 "data_offset": 0,
00:06:58.134 "data_size": 63488
00:06:58.134 },
00:06:58.134 {
00:06:58.134 "name": "BaseBdev2",
00:06:58.134 "uuid": "298b6df6-b869-4098-8f8c-f74406458ede",
00:06:58.134 "is_configured": true,
00:06:58.134 "data_offset": 2048,
00:06:58.134 "data_size": 63488
00:06:58.134 }
00:06:58.134 ]
00:06:58.134 }'
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:58.134 13:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:06:58.700 [2024-11-26 13:19:47.087835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:06:58.700 [2024-11-26 13:19:47.087890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61462
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61462 ']'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61462
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61462
killing process with pid 61462
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61462'
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61462
00:06:58.700 [2024-11-26 13:19:47.244183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:58.700 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61462
00:06:58.700 [2024-11-26 13:19:47.257355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:59.636 13:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:06:59.636
00:06:59.636 real	0m5.202s
00:06:59.636 user	0m7.990s
00:06:59.636 sys	0m0.766s
00:06:59.636 ************************************
00:06:59.636 END TEST raid_state_function_test_sb
00:06:59.636 ************************************
00:06:59.636 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:59.636 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:06:59.636 13:19:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:06:59.636 13:19:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:59.636 13:19:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:59.636 13:19:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:59.636 ************************************
00:06:59.636 START TEST raid_superblock_test
00:06:59.636 ************************************
00:06:59.636 13:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2
00:06:59.636 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:06:59.636 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:06:59.636 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:06:59.636 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:06:59.636 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:06:59.636 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61715
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61715
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61715 ']'
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:59.637 13:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.896 [2024-11-26 13:19:48.278948] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
00:06:59.896 [2024-11-26 13:19:48.279143] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61715 ]
00:07:00.156 [2024-11-26 13:19:48.461313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.156 [2024-11-26 13:19:48.575608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.415 [2024-11-26 13:19:48.744402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:00.415 [2024-11-26 13:19:48.744740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.982 malloc1
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.982 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.982 [2024-11-26 13:19:49.287575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:00.982 [2024-11-26 13:19:49.287645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:00.983 [2024-11-26 13:19:49.287677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:00.983 [2024-11-26 13:19:49.287691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:00.983 [2024-11-26 13:19:49.290325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:00.983 [2024-11-26 13:19:49.290366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:00.983 pt1
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.983 malloc2
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.983 [2024-11-26 13:19:49.333400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:00.983 [2024-11-26 13:19:49.333474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:00.983 [2024-11-26 13:19:49.333503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:00.983 [2024-11-26 13:19:49.333515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:00.983 [2024-11-26 13:19:49.336041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:00.983 [2024-11-26 13:19:49.336291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:00.983 pt2
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.983 [2024-11-26 13:19:49.345493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:00.983 [2024-11-26 13:19:49.347818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:00.983 [2024-11-26 13:19:49.348152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:00.983 [2024-11-26 13:19:49.348310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:00.983 [2024-11-26 13:19:49.348672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:00.983 [2024-11-26 13:19:49.348973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:00.983 [2024-11-26 13:19:49.349101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:07:00.983 [2024-11-26 13:19:49.349437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:00.983 "name": "raid_bdev1",
00:07:00.983 "uuid": "cf08aa1c-cbd7-4783-b217-f8944a5e5961",
00:07:00.983 "strip_size_kb": 64,
00:07:00.983 "state": "online",
00:07:00.983 "raid_level": "concat",
00:07:00.983 "superblock": true,
00:07:00.983 "num_base_bdevs": 2,
00:07:00.983 "num_base_bdevs_discovered": 2,
00:07:00.983 "num_base_bdevs_operational": 2,
00:07:00.983 "base_bdevs_list": [
00:07:00.983 {
00:07:00.983 "name": "pt1",
00:07:00.983 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:00.983 "is_configured": true,
00:07:00.983 "data_offset": 2048,
00:07:00.983 "data_size": 63488
00:07:00.983 },
00:07:00.983 {
00:07:00.983 "name": "pt2",
00:07:00.983 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:00.983 "is_configured": true,
00:07:00.983 "data_offset": 2048,
00:07:00.983 "data_size": 63488
00:07:00.983 }
00:07:00.983 ]
00:07:00.983 }'
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:00.983 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.551 [2024-11-26 13:19:49.857857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:01.551 "name": "raid_bdev1",
00:07:01.551 "aliases": [
00:07:01.551 "cf08aa1c-cbd7-4783-b217-f8944a5e5961"
00:07:01.551 ],
00:07:01.551 "product_name": "Raid Volume",
00:07:01.551 "block_size": 512,
00:07:01.551 "num_blocks": 126976,
00:07:01.551 "uuid": "cf08aa1c-cbd7-4783-b217-f8944a5e5961",
00:07:01.551 "assigned_rate_limits": {
00:07:01.551 "rw_ios_per_sec": 0,
00:07:01.551 "rw_mbytes_per_sec": 0,
00:07:01.551 "r_mbytes_per_sec": 0,
00:07:01.551 "w_mbytes_per_sec": 0
00:07:01.551 },
00:07:01.551 "claimed": false,
00:07:01.551 "zoned": false,
00:07:01.551 "supported_io_types": {
00:07:01.551 "read": true,
00:07:01.551 "write": true,
00:07:01.551 "unmap": true,
00:07:01.551 "flush": true,
00:07:01.551 "reset": true,
00:07:01.551 "nvme_admin": false,
00:07:01.551 "nvme_io": false,
00:07:01.551 "nvme_io_md": false,
00:07:01.551 "write_zeroes": true,
00:07:01.551 "zcopy": false,
00:07:01.551 "get_zone_info": false,
00:07:01.551 "zone_management": false,
00:07:01.551 "zone_append": false,
00:07:01.551 "compare": false,
00:07:01.551 "compare_and_write": false,
00:07:01.551 "abort": false,
00:07:01.551 "seek_hole": false,
00:07:01.551 "seek_data": false,
00:07:01.551 "copy": false,
00:07:01.551 "nvme_iov_md": false
00:07:01.551 },
00:07:01.551 "memory_domains": [
00:07:01.551 {
00:07:01.551 "dma_device_id": "system",
00:07:01.551 "dma_device_type": 1
00:07:01.551 },
00:07:01.551 {
00:07:01.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.551 "dma_device_type": 2
00:07:01.551 },
00:07:01.551 {
00:07:01.551 "dma_device_id": "system",
00:07:01.551 "dma_device_type": 1
00:07:01.551 },
00:07:01.551 {
00:07:01.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.551 "dma_device_type": 2
00:07:01.551 }
00:07:01.551 ],
00:07:01.551 "driver_specific": {
00:07:01.551 "raid": {
00:07:01.551 "uuid": "cf08aa1c-cbd7-4783-b217-f8944a5e5961",
00:07:01.551 "strip_size_kb": 64,
00:07:01.551 "state": "online",
00:07:01.551 "raid_level": "concat",
00:07:01.551 "superblock": true,
00:07:01.551 "num_base_bdevs": 2,
00:07:01.551 "num_base_bdevs_discovered": 2,
00:07:01.551 "num_base_bdevs_operational": 2,
00:07:01.551 "base_bdevs_list": [
00:07:01.551 {
00:07:01.551 "name": "pt1",
00:07:01.551 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:01.551 "is_configured": true,
00:07:01.551 "data_offset": 2048,
00:07:01.551 "data_size": 63488
00:07:01.551 },
00:07:01.551 {
00:07:01.551 "name": "pt2",
00:07:01.551 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:01.551 "is_configured": true,
00:07:01.551 "data_offset": 2048,
00:07:01.551 "data_size": 63488
00:07:01.551 }
00:07:01.551 ]
00:07:01.551 }
00:07:01.551 }
00:07:01.551 }'
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:01.551 pt2'
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:01.551 13:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.551 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 [2024-11-26 13:19:50.117948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cf08aa1c-cbd7-4783-b217-f8944a5e5961
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cf08aa1c-cbd7-4783-b217-f8944a5e5961 ']'
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 [2024-11-26 13:19:50.169674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:01.811 [2024-11-26 13:19:50.169697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:01.811 [2024-11-26 13:19:50.169767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:01.811 [2024-11-26 13:19:50.169814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:01.811 [2024-11-26 13:19:50.169833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 [2024-11-26 13:19:50.309752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:01.811 [2024-11-26 13:19:50.311924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:01.811 [2024-11-26 13:19:50.311988] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:01.811 [2024-11-26 13:19:50.312051] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:01.811 [2024-11-26 13:19:50.312074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:01.811 [2024-11-26 13:19:50.312088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:07:01.811 request:
00:07:01.811 {
00:07:01.811 "name": "raid_bdev1",
00:07:01.811 "raid_level": "concat",
00:07:01.811 "base_bdevs": [
00:07:01.811 "malloc1",
00:07:01.811 "malloc2"
00:07:01.811 ],
00:07:01.811 "strip_size_kb": 64,
00:07:01.811 "superblock": false,
00:07:01.811 "method": "bdev_raid_create",
00:07:01.811 "req_id": 1
00:07:01.811 }
00:07:01.811 Got JSON-RPC error response
00:07:01.811 response:
00:07:01.811 {
00:07:01.811 "code": -17,
00:07:01.811 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:01.811 }
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.811 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.071 [2024-11-26 13:19:50.377742] vbdev_passthru.c: 607:vbdev_passthru_register:
*NOTICE*: Match on malloc1 00:07:02.071 [2024-11-26 13:19:50.377798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.071 [2024-11-26 13:19:50.377820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:02.071 [2024-11-26 13:19:50.377834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.071 [2024-11-26 13:19:50.380570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.071 [2024-11-26 13:19:50.380650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:02.071 [2024-11-26 13:19:50.380722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:02.071 [2024-11-26 13:19:50.380786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:02.071 pt1 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.071 "name": "raid_bdev1", 00:07:02.071 "uuid": "cf08aa1c-cbd7-4783-b217-f8944a5e5961", 00:07:02.071 "strip_size_kb": 64, 00:07:02.071 "state": "configuring", 00:07:02.071 "raid_level": "concat", 00:07:02.071 "superblock": true, 00:07:02.071 "num_base_bdevs": 2, 00:07:02.071 "num_base_bdevs_discovered": 1, 00:07:02.071 "num_base_bdevs_operational": 2, 00:07:02.071 "base_bdevs_list": [ 00:07:02.071 { 00:07:02.071 "name": "pt1", 00:07:02.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.071 "is_configured": true, 00:07:02.071 "data_offset": 2048, 00:07:02.071 "data_size": 63488 00:07:02.071 }, 00:07:02.071 { 00:07:02.071 "name": null, 00:07:02.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.071 "is_configured": false, 00:07:02.071 "data_offset": 2048, 00:07:02.071 "data_size": 63488 00:07:02.071 } 00:07:02.071 ] 00:07:02.071 }' 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.071 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.330 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:02.330 13:19:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:02.330 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:02.330 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:02.330 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.330 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.330 [2024-11-26 13:19:50.889889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:02.330 [2024-11-26 13:19:50.889991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.330 [2024-11-26 13:19:50.890016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:02.330 [2024-11-26 13:19:50.890030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.330 [2024-11-26 13:19:50.890515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.330 [2024-11-26 13:19:50.890551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:02.330 [2024-11-26 13:19:50.890663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:02.330 [2024-11-26 13:19:50.890691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:02.330 [2024-11-26 13:19:50.890851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:02.330 [2024-11-26 13:19:50.890877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:02.330 [2024-11-26 13:19:50.891136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:02.330 [2024-11-26 13:19:50.891417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:02.330 [2024-11-26 13:19:50.891543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:02.330 [2024-11-26 13:19:50.891794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.330 pt2 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.589 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.589 "name": "raid_bdev1", 00:07:02.590 "uuid": "cf08aa1c-cbd7-4783-b217-f8944a5e5961", 00:07:02.590 "strip_size_kb": 64, 00:07:02.590 "state": "online", 00:07:02.590 "raid_level": "concat", 00:07:02.590 "superblock": true, 00:07:02.590 "num_base_bdevs": 2, 00:07:02.590 "num_base_bdevs_discovered": 2, 00:07:02.590 "num_base_bdevs_operational": 2, 00:07:02.590 "base_bdevs_list": [ 00:07:02.590 { 00:07:02.590 "name": "pt1", 00:07:02.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.590 "is_configured": true, 00:07:02.590 "data_offset": 2048, 00:07:02.590 "data_size": 63488 00:07:02.590 }, 00:07:02.590 { 00:07:02.590 "name": "pt2", 00:07:02.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.590 "is_configured": true, 00:07:02.590 "data_offset": 2048, 00:07:02.590 "data_size": 63488 00:07:02.590 } 00:07:02.590 ] 00:07:02.590 }' 00:07:02.590 13:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.590 13:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.849 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:02.849 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:02.849 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:03.108 13:19:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.108 [2024-11-26 13:19:51.422330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.108 "name": "raid_bdev1", 00:07:03.108 "aliases": [ 00:07:03.108 "cf08aa1c-cbd7-4783-b217-f8944a5e5961" 00:07:03.108 ], 00:07:03.108 "product_name": "Raid Volume", 00:07:03.108 "block_size": 512, 00:07:03.108 "num_blocks": 126976, 00:07:03.108 "uuid": "cf08aa1c-cbd7-4783-b217-f8944a5e5961", 00:07:03.108 "assigned_rate_limits": { 00:07:03.108 "rw_ios_per_sec": 0, 00:07:03.108 "rw_mbytes_per_sec": 0, 00:07:03.108 "r_mbytes_per_sec": 0, 00:07:03.108 "w_mbytes_per_sec": 0 00:07:03.108 }, 00:07:03.108 "claimed": false, 00:07:03.108 "zoned": false, 00:07:03.108 "supported_io_types": { 00:07:03.108 "read": true, 00:07:03.108 "write": true, 00:07:03.108 "unmap": true, 00:07:03.108 "flush": true, 00:07:03.108 "reset": true, 00:07:03.108 "nvme_admin": false, 00:07:03.108 "nvme_io": false, 00:07:03.108 "nvme_io_md": false, 00:07:03.108 "write_zeroes": true, 00:07:03.108 "zcopy": false, 00:07:03.108 "get_zone_info": false, 00:07:03.108 "zone_management": false, 00:07:03.108 "zone_append": false, 00:07:03.108 "compare": false, 00:07:03.108 "compare_and_write": false, 00:07:03.108 "abort": false, 00:07:03.108 "seek_hole": false, 00:07:03.108 
"seek_data": false, 00:07:03.108 "copy": false, 00:07:03.108 "nvme_iov_md": false 00:07:03.108 }, 00:07:03.108 "memory_domains": [ 00:07:03.108 { 00:07:03.108 "dma_device_id": "system", 00:07:03.108 "dma_device_type": 1 00:07:03.108 }, 00:07:03.108 { 00:07:03.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.108 "dma_device_type": 2 00:07:03.108 }, 00:07:03.108 { 00:07:03.108 "dma_device_id": "system", 00:07:03.108 "dma_device_type": 1 00:07:03.108 }, 00:07:03.108 { 00:07:03.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.108 "dma_device_type": 2 00:07:03.108 } 00:07:03.108 ], 00:07:03.108 "driver_specific": { 00:07:03.108 "raid": { 00:07:03.108 "uuid": "cf08aa1c-cbd7-4783-b217-f8944a5e5961", 00:07:03.108 "strip_size_kb": 64, 00:07:03.108 "state": "online", 00:07:03.108 "raid_level": "concat", 00:07:03.108 "superblock": true, 00:07:03.108 "num_base_bdevs": 2, 00:07:03.108 "num_base_bdevs_discovered": 2, 00:07:03.108 "num_base_bdevs_operational": 2, 00:07:03.108 "base_bdevs_list": [ 00:07:03.108 { 00:07:03.108 "name": "pt1", 00:07:03.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.108 "is_configured": true, 00:07:03.108 "data_offset": 2048, 00:07:03.108 "data_size": 63488 00:07:03.108 }, 00:07:03.108 { 00:07:03.108 "name": "pt2", 00:07:03.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:03.108 "is_configured": true, 00:07:03.108 "data_offset": 2048, 00:07:03.108 "data_size": 63488 00:07:03.108 } 00:07:03.108 ] 00:07:03.108 } 00:07:03.108 } 00:07:03.108 }' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:03.108 pt2' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.108 13:19:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.108 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.108 [2024-11-26 13:19:51.662343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cf08aa1c-cbd7-4783-b217-f8944a5e5961 '!=' cf08aa1c-cbd7-4783-b217-f8944a5e5961 ']' 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61715 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61715 ']' 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61715 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61715 00:07:03.367 killing process with pid 61715 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61715' 00:07:03.367 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61715 00:07:03.367 [2024-11-26 13:19:51.745680] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.367 [2024-11-26 13:19:51.745743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.367 [2024-11-26 13:19:51.745787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.368 [2024-11-26 13:19:51.745804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:03.368 13:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61715 00:07:03.368 [2024-11-26 13:19:51.886598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.306 ************************************ 00:07:04.306 END TEST raid_superblock_test 00:07:04.306 ************************************ 00:07:04.306 13:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:04.306 00:07:04.306 real 0m4.565s 00:07:04.306 user 0m6.863s 00:07:04.306 sys 0m0.691s 00:07:04.306 13:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.306 13:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.306 13:19:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:04.306 13:19:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:04.306 13:19:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.306 13:19:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.306 ************************************ 00:07:04.306 START TEST raid_read_error_test 00:07:04.306 ************************************ 00:07:04.306 13:19:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:04.306 13:19:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rrhL4u420I 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61931 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61931 00:07:04.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61931 ']' 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.306 13:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.565 [2024-11-26 13:19:52.906512] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:07:04.565 [2024-11-26 13:19:52.906985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61931 ] 00:07:04.565 [2024-11-26 13:19:53.090074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.824 [2024-11-26 13:19:53.188657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.824 [2024-11-26 13:19:53.361336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.824 [2024-11-26 13:19:53.361519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 BaseBdev1_malloc 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 true 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 [2024-11-26 13:19:53.880432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:05.392 [2024-11-26 13:19:53.880676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.392 [2024-11-26 13:19:53.880713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:05.392 [2024-11-26 13:19:53.880731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.392 [2024-11-26 13:19:53.883372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.392 [2024-11-26 13:19:53.883435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:05.392 BaseBdev1 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 BaseBdev2_malloc 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 true 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 [2024-11-26 13:19:53.930543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:05.392 [2024-11-26 13:19:53.930600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.392 [2024-11-26 13:19:53.930624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:05.392 [2024-11-26 13:19:53.930638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.392 [2024-11-26 13:19:53.933173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.392 [2024-11-26 13:19:53.933218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:05.392 BaseBdev2 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.392 [2024-11-26 13:19:53.938629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:05.392 [2024-11-26 13:19:53.940889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.392 [2024-11-26 13:19:53.941112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:05.392 [2024-11-26 13:19:53.941133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.392 [2024-11-26 13:19:53.941405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:05.392 [2024-11-26 13:19:53.941606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:05.392 [2024-11-26 13:19:53.941624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:05.392 [2024-11-26 13:19:53.941804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.392 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.651 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.651 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.651 "name": "raid_bdev1", 00:07:05.652 "uuid": "121364f7-ee04-4446-b660-65b2b8634630", 00:07:05.652 "strip_size_kb": 64, 00:07:05.652 "state": "online", 00:07:05.652 "raid_level": "concat", 00:07:05.652 "superblock": true, 00:07:05.652 "num_base_bdevs": 2, 00:07:05.652 "num_base_bdevs_discovered": 2, 00:07:05.652 "num_base_bdevs_operational": 2, 00:07:05.652 "base_bdevs_list": [ 00:07:05.652 { 00:07:05.652 "name": "BaseBdev1", 00:07:05.652 "uuid": "69a953ba-00c4-5846-a078-91ea5581d29a", 00:07:05.652 "is_configured": true, 00:07:05.652 "data_offset": 2048, 00:07:05.652 "data_size": 63488 00:07:05.652 }, 00:07:05.652 { 00:07:05.652 "name": "BaseBdev2", 00:07:05.652 "uuid": "bf4335cc-25fa-50bf-b4b6-8418d37216c9", 00:07:05.652 "is_configured": true, 00:07:05.652 "data_offset": 2048, 00:07:05.652 "data_size": 63488 00:07:05.652 } 00:07:05.652 ] 00:07:05.652 }' 00:07:05.652 13:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.652 13:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.911 13:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:05.911 13:19:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:06.170 [2024-11-26 13:19:54.571850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.107 "name": "raid_bdev1", 00:07:07.107 "uuid": "121364f7-ee04-4446-b660-65b2b8634630", 00:07:07.107 "strip_size_kb": 64, 00:07:07.107 "state": "online", 00:07:07.107 "raid_level": "concat", 00:07:07.107 "superblock": true, 00:07:07.107 "num_base_bdevs": 2, 00:07:07.107 "num_base_bdevs_discovered": 2, 00:07:07.107 "num_base_bdevs_operational": 2, 00:07:07.107 "base_bdevs_list": [ 00:07:07.107 { 00:07:07.107 "name": "BaseBdev1", 00:07:07.107 "uuid": "69a953ba-00c4-5846-a078-91ea5581d29a", 00:07:07.107 "is_configured": true, 00:07:07.107 "data_offset": 2048, 00:07:07.107 "data_size": 63488 00:07:07.107 }, 00:07:07.107 { 00:07:07.107 "name": "BaseBdev2", 00:07:07.107 "uuid": "bf4335cc-25fa-50bf-b4b6-8418d37216c9", 00:07:07.107 "is_configured": true, 00:07:07.107 "data_offset": 2048, 00:07:07.107 "data_size": 63488 00:07:07.107 } 00:07:07.107 ] 00:07:07.107 }' 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.107 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:07.675 13:19:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.675 [2024-11-26 13:19:55.955835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:07.675 [2024-11-26 13:19:55.956018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.675 [2024-11-26 13:19:55.959472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.675 [2024-11-26 13:19:55.959741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.675 [2024-11-26 13:19:55.959795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.675 [2024-11-26 13:19:55.959817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:07.675 { 00:07:07.675 "results": [ 00:07:07.675 { 00:07:07.675 "job": "raid_bdev1", 00:07:07.675 "core_mask": "0x1", 00:07:07.675 "workload": "randrw", 00:07:07.675 "percentage": 50, 00:07:07.675 "status": "finished", 00:07:07.675 "queue_depth": 1, 00:07:07.675 "io_size": 131072, 00:07:07.675 "runtime": 1.382196, 00:07:07.675 "iops": 13856.934906482149, 00:07:07.675 "mibps": 1732.1168633102686, 00:07:07.675 "io_failed": 1, 00:07:07.675 "io_timeout": 0, 00:07:07.675 "avg_latency_us": 100.62420609984147, 00:07:07.675 "min_latency_us": 34.67636363636364, 00:07:07.675 "max_latency_us": 1511.7963636363636 00:07:07.675 } 00:07:07.675 ], 00:07:07.675 "core_count": 1 00:07:07.675 } 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61931 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61931 ']' 00:07:07.675 13:19:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61931 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61931 00:07:07.675 killing process with pid 61931 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61931' 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61931 00:07:07.675 [2024-11-26 13:19:55.996743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.675 13:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61931 00:07:07.675 [2024-11-26 13:19:56.090438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rrhL4u420I 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:08.613 00:07:08.613 real 0m4.188s 00:07:08.613 user 0m5.287s 00:07:08.613 sys 0m0.530s 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.613 13:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.613 ************************************ 00:07:08.613 END TEST raid_read_error_test 00:07:08.613 ************************************ 00:07:08.613 13:19:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:08.613 13:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:08.613 13:19:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.613 13:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.613 ************************************ 00:07:08.613 START TEST raid_write_error_test 00:07:08.613 ************************************ 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.613 13:19:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zUk2V46b4Q 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62071 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62071 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62071 ']' 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.613 13:19:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.613 [2024-11-26 13:19:57.142669] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:07:08.613 [2024-11-26 13:19:57.142857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62071 ] 00:07:08.872 [2024-11-26 13:19:57.323654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.872 [2024-11-26 13:19:57.434615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.130 [2024-11-26 13:19:57.602078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.130 [2024-11-26 13:19:57.602123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 BaseBdev1_malloc 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 true 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 [2024-11-26 13:19:58.153986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:09.699 [2024-11-26 13:19:58.154082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.699 [2024-11-26 13:19:58.154109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:09.699 [2024-11-26 13:19:58.154127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.699 [2024-11-26 13:19:58.156757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.699 [2024-11-26 13:19:58.156799] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:09.699 BaseBdev1 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 BaseBdev2_malloc 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 true 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 [2024-11-26 13:19:58.204288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:09.699 [2024-11-26 13:19:58.204374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.699 [2024-11-26 13:19:58.204399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 
00:07:09.699 [2024-11-26 13:19:58.204416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.699 [2024-11-26 13:19:58.206990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.699 [2024-11-26 13:19:58.207068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:09.699 BaseBdev2 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 [2024-11-26 13:19:58.212366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.699 [2024-11-26 13:19:58.214678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.699 [2024-11-26 13:19:58.214919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:09.699 [2024-11-26 13:19:58.214971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.699 [2024-11-26 13:19:58.215257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:09.699 [2024-11-26 13:19:58.215530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:09.699 [2024-11-26 13:19:58.215559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:09.699 [2024-11-26 13:19:58.215765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.699 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.958 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.958 "name": "raid_bdev1", 00:07:09.958 "uuid": "d1743352-d457-4bff-8263-7babaff306a9", 00:07:09.958 "strip_size_kb": 64, 00:07:09.958 "state": "online", 00:07:09.958 "raid_level": "concat", 00:07:09.958 "superblock": 
true, 00:07:09.958 "num_base_bdevs": 2, 00:07:09.958 "num_base_bdevs_discovered": 2, 00:07:09.958 "num_base_bdevs_operational": 2, 00:07:09.958 "base_bdevs_list": [ 00:07:09.958 { 00:07:09.958 "name": "BaseBdev1", 00:07:09.958 "uuid": "25755d6b-3c67-5833-ae3f-a9eeaffe492c", 00:07:09.958 "is_configured": true, 00:07:09.958 "data_offset": 2048, 00:07:09.958 "data_size": 63488 00:07:09.958 }, 00:07:09.958 { 00:07:09.958 "name": "BaseBdev2", 00:07:09.958 "uuid": "e5c69333-1703-541e-93b2-a17262b3bb37", 00:07:09.958 "is_configured": true, 00:07:09.958 "data_offset": 2048, 00:07:09.959 "data_size": 63488 00:07:09.959 } 00:07:09.959 ] 00:07:09.959 }' 00:07:09.959 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.959 13:19:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.218 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:10.218 13:19:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:10.477 [2024-11-26 13:19:58.857603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.414 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.414 "name": "raid_bdev1", 00:07:11.414 "uuid": "d1743352-d457-4bff-8263-7babaff306a9", 00:07:11.414 "strip_size_kb": 64, 00:07:11.414 "state": "online", 00:07:11.414 "raid_level": "concat", 
00:07:11.414 "superblock": true, 00:07:11.414 "num_base_bdevs": 2, 00:07:11.414 "num_base_bdevs_discovered": 2, 00:07:11.414 "num_base_bdevs_operational": 2, 00:07:11.414 "base_bdevs_list": [ 00:07:11.415 { 00:07:11.415 "name": "BaseBdev1", 00:07:11.415 "uuid": "25755d6b-3c67-5833-ae3f-a9eeaffe492c", 00:07:11.415 "is_configured": true, 00:07:11.415 "data_offset": 2048, 00:07:11.415 "data_size": 63488 00:07:11.415 }, 00:07:11.415 { 00:07:11.415 "name": "BaseBdev2", 00:07:11.415 "uuid": "e5c69333-1703-541e-93b2-a17262b3bb37", 00:07:11.415 "is_configured": true, 00:07:11.415 "data_offset": 2048, 00:07:11.415 "data_size": 63488 00:07:11.415 } 00:07:11.415 ] 00:07:11.415 }' 00:07:11.415 13:19:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.415 13:19:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.071 [2024-11-26 13:20:00.281482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.071 [2024-11-26 13:20:00.281541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.071 [2024-11-26 13:20:00.284576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.071 [2024-11-26 13:20:00.284647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.071 [2024-11-26 13:20:00.284685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.071 [2024-11-26 13:20:00.284711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:12.071 { 
00:07:12.071 "results": [
00:07:12.071 {
00:07:12.071 "job": "raid_bdev1",
00:07:12.071 "core_mask": "0x1",
00:07:12.071 "workload": "randrw",
00:07:12.071 "percentage": 50,
00:07:12.071 "status": "finished",
00:07:12.071 "queue_depth": 1,
00:07:12.071 "io_size": 131072,
00:07:12.071 "runtime": 1.42181,
00:07:12.071 "iops": 13650.206427019082,
00:07:12.071 "mibps": 1706.2758033773853,
00:07:12.071 "io_failed": 1,
00:07:12.071 "io_timeout": 0,
00:07:12.071 "avg_latency_us": 101.8555088314231,
00:07:12.071 "min_latency_us": 34.21090909090909,
00:07:12.071 "max_latency_us": 1645.8472727272726
00:07:12.071 }
00:07:12.071 ],
00:07:12.071 "core_count": 1
00:07:12.071 }
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62071
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62071 ']'
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62071
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62071
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:12.071 killing process with pid 62071
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62071'
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62071
00:07:12.071 [2024-11-26 13:20:00.323530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:12.071 13:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62071
00:07:12.071 [2024-11-26 13:20:00.415962] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zUk2V46b4Q
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]]
00:07:13.103
00:07:13.103 real 0m4.413s
00:07:13.103 user 0m5.603s
00:07:13.103 sys 0m0.532s
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.103 13:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:13.103 ************************************
00:07:13.103 END TEST raid_write_error_test
00:07:13.103 ************************************
00:07:13.103 13:20:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:13.103 13:20:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:07:13.103 13:20:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:13.103 13:20:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.103 13:20:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:13.103 ************************************
00:07:13.103 START TEST raid_state_function_test
00:07:13.103 ************************************
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62215
00:07:13.103 Process raid pid: 62215
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62215'
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62215
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62215 ']'
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:13.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.103 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.104 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:13.104 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:13.104 [2024-11-26 13:20:01.604972] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
00:07:13.104 [2024-11-26 13:20:01.605166] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:13.363 [2024-11-26 13:20:01.784857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.363 [2024-11-26 13:20:01.895905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.622 [2024-11-26 13:20:02.087114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:13.622 [2024-11-26 13:20:02.087165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.190 [2024-11-26 13:20:02.492930] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:14.190 [2024-11-26 13:20:02.492993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:14.190 [2024-11-26 13:20:02.493009] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:14.190 [2024-11-26 13:20:02.493023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:14.190 "name": "Existed_Raid",
00:07:14.190 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:14.190 "strip_size_kb": 0,
00:07:14.190 "state": "configuring",
00:07:14.190 "raid_level": "raid1",
00:07:14.190 "superblock": false,
00:07:14.190 "num_base_bdevs": 2,
00:07:14.190 "num_base_bdevs_discovered": 0,
00:07:14.190 "num_base_bdevs_operational": 2,
00:07:14.190 "base_bdevs_list": [
00:07:14.190 {
00:07:14.190 "name": "BaseBdev1",
00:07:14.190 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:14.190 "is_configured": false,
00:07:14.190 "data_offset": 0,
00:07:14.190 "data_size": 0
00:07:14.190 },
00:07:14.190 {
00:07:14.190 "name": "BaseBdev2",
00:07:14.190 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:14.190 "is_configured": false,
00:07:14.190 "data_offset": 0,
00:07:14.190 "data_size": 0
00:07:14.190 }
00:07:14.190 ]
00:07:14.190 }'
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:14.190 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.449 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:14.449 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.449 13:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.449 [2024-11-26 13:20:03.000969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:14.449 [2024-11-26 13:20:03.001004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:14.449 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.449 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:14.449 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.449 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.449 [2024-11-26 13:20:03.008960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:14.449 [2024-11-26 13:20:03.008999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:14.449 [2024-11-26 13:20:03.009011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:14.449 [2024-11-26 13:20:03.009028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:14.449 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.449 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.709 [2024-11-26 13:20:03.051944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:14.709 BaseBdev1
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.709 [
00:07:14.709 {
00:07:14.709 "name": "BaseBdev1",
00:07:14.709 "aliases": [
00:07:14.709 "6d930869-f75a-4900-be0a-13ec64119dab"
00:07:14.709 ],
00:07:14.709 "product_name": "Malloc disk",
00:07:14.709 "block_size": 512,
00:07:14.709 "num_blocks": 65536,
00:07:14.709 "uuid": "6d930869-f75a-4900-be0a-13ec64119dab",
00:07:14.709 "assigned_rate_limits": {
00:07:14.709 "rw_ios_per_sec": 0,
00:07:14.709 "rw_mbytes_per_sec": 0,
00:07:14.709 "r_mbytes_per_sec": 0,
00:07:14.709 "w_mbytes_per_sec": 0
00:07:14.709 },
00:07:14.709 "claimed": true,
00:07:14.709 "claim_type": "exclusive_write",
00:07:14.709 "zoned": false,
00:07:14.709 "supported_io_types": {
00:07:14.709 "read": true,
00:07:14.709 "write": true,
00:07:14.709 "unmap": true,
00:07:14.709 "flush": true,
00:07:14.709 "reset": true,
00:07:14.709 "nvme_admin": false,
00:07:14.709 "nvme_io": false,
00:07:14.709 "nvme_io_md": false,
00:07:14.709 "write_zeroes": true,
00:07:14.709 "zcopy": true,
00:07:14.709 "get_zone_info": false,
00:07:14.709 "zone_management": false,
00:07:14.709 "zone_append": false,
00:07:14.709 "compare": false,
00:07:14.709 "compare_and_write": false,
00:07:14.709 "abort": true,
00:07:14.709 "seek_hole": false,
00:07:14.709 "seek_data": false,
00:07:14.709 "copy": true,
00:07:14.709 "nvme_iov_md": false
00:07:14.709 },
00:07:14.709 "memory_domains": [
00:07:14.709 {
00:07:14.709 "dma_device_id": "system",
00:07:14.709 "dma_device_type": 1
00:07:14.709 },
00:07:14.709 {
00:07:14.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:14.709 "dma_device_type": 2
00:07:14.709 }
00:07:14.709 ],
00:07:14.709 "driver_specific": {}
00:07:14.709 }
00:07:14.709 ]
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:14.709 "name": "Existed_Raid",
00:07:14.709 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:14.709 "strip_size_kb": 0,
00:07:14.709 "state": "configuring",
00:07:14.709 "raid_level": "raid1",
00:07:14.709 "superblock": false,
00:07:14.709 "num_base_bdevs": 2,
00:07:14.709 "num_base_bdevs_discovered": 1,
00:07:14.709 "num_base_bdevs_operational": 2,
00:07:14.709 "base_bdevs_list": [
00:07:14.709 {
00:07:14.709 "name": "BaseBdev1",
00:07:14.709 "uuid": "6d930869-f75a-4900-be0a-13ec64119dab",
00:07:14.709 "is_configured": true,
00:07:14.709 "data_offset": 0,
00:07:14.709 "data_size": 65536
00:07:14.709 },
00:07:14.709 {
00:07:14.709 "name": "BaseBdev2",
00:07:14.709 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:14.709 "is_configured": false,
00:07:14.709 "data_offset": 0,
00:07:14.709 "data_size": 0
00:07:14.709 }
00:07:14.709 ]
00:07:14.709 }'
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:14.709 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.278 [2024-11-26 13:20:03.612059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:15.278 [2024-11-26 13:20:03.612099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.278 [2024-11-26 13:20:03.620109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:15.278 [2024-11-26 13:20:03.622186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:15.278 [2024-11-26 13:20:03.622254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.278 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.279 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.279 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:15.279 "name": "Existed_Raid",
00:07:15.279 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:15.279 "strip_size_kb": 0,
00:07:15.279 "state": "configuring",
00:07:15.279 "raid_level": "raid1",
00:07:15.279 "superblock": false,
00:07:15.279 "num_base_bdevs": 2,
00:07:15.279 "num_base_bdevs_discovered": 1,
00:07:15.279 "num_base_bdevs_operational": 2,
00:07:15.279 "base_bdevs_list": [
00:07:15.279 {
00:07:15.279 "name": "BaseBdev1",
00:07:15.279 "uuid": "6d930869-f75a-4900-be0a-13ec64119dab",
00:07:15.279 "is_configured": true,
00:07:15.279 "data_offset": 0,
00:07:15.279 "data_size": 65536
00:07:15.279 },
00:07:15.279 {
00:07:15.279 "name": "BaseBdev2",
00:07:15.279 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:15.279 "is_configured": false,
00:07:15.279 "data_offset": 0,
00:07:15.279 "data_size": 0
00:07:15.279 }
00:07:15.279 ]
00:07:15.279 }'
00:07:15.279 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:15.279 13:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.847 [2024-11-26 13:20:04.179955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:15.847 [2024-11-26 13:20:04.180006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:15.847 [2024-11-26 13:20:04.180018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:15.847 [2024-11-26 13:20:04.180327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:15.847 [2024-11-26 13:20:04.180529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:15.847 [2024-11-26 13:20:04.180550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:15.847 [2024-11-26 13:20:04.180807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:15.847 BaseBdev2
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.847 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.847 [
00:07:15.847 {
00:07:15.847 "name": "BaseBdev2",
00:07:15.847 "aliases": [
00:07:15.847 "87eb36c5-f7e1-435d-8a89-374d505ce55d"
00:07:15.847 ],
00:07:15.847 "product_name": "Malloc disk",
00:07:15.847 "block_size": 512,
00:07:15.847 "num_blocks": 65536,
00:07:15.847 "uuid": "87eb36c5-f7e1-435d-8a89-374d505ce55d",
00:07:15.847 "assigned_rate_limits": {
00:07:15.847 "rw_ios_per_sec": 0,
00:07:15.847 "rw_mbytes_per_sec": 0,
00:07:15.847 "r_mbytes_per_sec": 0,
00:07:15.847 "w_mbytes_per_sec": 0
00:07:15.847 },
00:07:15.847 "claimed": true,
00:07:15.847 "claim_type": "exclusive_write",
00:07:15.847 "zoned": false,
00:07:15.847 "supported_io_types": {
00:07:15.847 "read": true,
00:07:15.847 "write": true,
00:07:15.847 "unmap": true,
00:07:15.847 "flush": true,
00:07:15.847 "reset": true,
00:07:15.847 "nvme_admin": false,
00:07:15.847 "nvme_io": false,
00:07:15.847 "nvme_io_md": false,
00:07:15.847 "write_zeroes": true,
00:07:15.847 "zcopy": true,
00:07:15.847 "get_zone_info": false,
00:07:15.847 "zone_management": false,
00:07:15.847 "zone_append": false,
00:07:15.847 "compare": false,
00:07:15.847 "compare_and_write": false,
00:07:15.848 "abort": true,
00:07:15.848 "seek_hole": false,
00:07:15.848 "seek_data": false,
00:07:15.848 "copy": true,
00:07:15.848 "nvme_iov_md": false
00:07:15.848 },
00:07:15.848 "memory_domains": [
00:07:15.848 {
00:07:15.848 "dma_device_id": "system",
00:07:15.848 "dma_device_type": 1
00:07:15.848 },
00:07:15.848 {
00:07:15.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.848 "dma_device_type": 2
00:07:15.848 }
00:07:15.848 ],
00:07:15.848 "driver_specific": {}
00:07:15.848 }
00:07:15.848 ]
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:15.848 "name": "Existed_Raid",
00:07:15.848 "uuid": "7b861070-74cc-4612-8851-0cf8a7462662",
00:07:15.848 "strip_size_kb": 0,
00:07:15.848 "state": "online",
00:07:15.848 "raid_level": "raid1",
00:07:15.848 "superblock": false,
00:07:15.848 "num_base_bdevs": 2,
00:07:15.848 "num_base_bdevs_discovered": 2,
00:07:15.848 "num_base_bdevs_operational": 2,
00:07:15.848 "base_bdevs_list": [
00:07:15.848 {
00:07:15.848 "name": "BaseBdev1",
00:07:15.848 "uuid": "6d930869-f75a-4900-be0a-13ec64119dab",
00:07:15.848 "is_configured": true,
00:07:15.848 "data_offset": 0,
00:07:15.848 "data_size": 65536
00:07:15.848 },
00:07:15.848 {
00:07:15.848 "name": "BaseBdev2",
00:07:15.848 "uuid": "87eb36c5-f7e1-435d-8a89-374d505ce55d",
00:07:15.848 "is_configured": true,
00:07:15.848 "data_offset": 0,
00:07:15.848 "data_size": 65536
00:07:15.848 }
00:07:15.848 ]
00:07:15.848 }'
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:15.848 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.414 [2024-11-26 13:20:04.736363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:16.414 "name": "Existed_Raid",
00:07:16.414 "aliases": [
00:07:16.414 "7b861070-74cc-4612-8851-0cf8a7462662"
00:07:16.414 ],
00:07:16.414 "product_name": "Raid Volume",
00:07:16.414 "block_size": 512,
00:07:16.414 "num_blocks": 65536,
00:07:16.414 "uuid": "7b861070-74cc-4612-8851-0cf8a7462662",
00:07:16.414 "assigned_rate_limits": {
00:07:16.414 "rw_ios_per_sec": 0,
00:07:16.414 "rw_mbytes_per_sec": 0,
00:07:16.414 "r_mbytes_per_sec": 0,
00:07:16.414 "w_mbytes_per_sec": 0
00:07:16.414 },
00:07:16.414 "claimed": false,
00:07:16.414 "zoned": false,
00:07:16.414 "supported_io_types": {
00:07:16.414 "read": true,
00:07:16.414 "write": true,
00:07:16.414 "unmap": false,
00:07:16.414 "flush": false,
00:07:16.414 "reset": true,
00:07:16.414 "nvme_admin": false,
00:07:16.414 "nvme_io": false,
00:07:16.414 "nvme_io_md": false,
00:07:16.414 "write_zeroes": true,
00:07:16.414 "zcopy": false,
00:07:16.414 "get_zone_info": false,
00:07:16.414 "zone_management": false,
00:07:16.414 "zone_append": false,
00:07:16.414 "compare": false,
00:07:16.414 "compare_and_write": false,
00:07:16.414 "abort": false,
00:07:16.414 "seek_hole": false,
00:07:16.414 "seek_data": false,
00:07:16.414 "copy": false,
00:07:16.414 "nvme_iov_md": false
00:07:16.414 },
00:07:16.414 "memory_domains": [
00:07:16.414 {
00:07:16.414 "dma_device_id": "system",
00:07:16.414 "dma_device_type": 1
00:07:16.414 },
00:07:16.414 {
00:07:16.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:16.414 "dma_device_type": 2
00:07:16.414 },
00:07:16.414 {
00:07:16.414 "dma_device_id": "system",
00:07:16.414 "dma_device_type": 1
00:07:16.414 },
00:07:16.414 {
00:07:16.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:16.414 "dma_device_type": 2
00:07:16.414 }
00:07:16.414 ],
00:07:16.414 "driver_specific": {
00:07:16.414 "raid": {
00:07:16.414 "uuid": "7b861070-74cc-4612-8851-0cf8a7462662",
00:07:16.414 "strip_size_kb": 0,
00:07:16.414 "state": "online",
00:07:16.414 "raid_level": "raid1",
00:07:16.414 "superblock": false,
00:07:16.414 "num_base_bdevs": 2,
00:07:16.414 "num_base_bdevs_discovered": 2,
00:07:16.414 "num_base_bdevs_operational": 2,
00:07:16.414 "base_bdevs_list": [
00:07:16.414 {
00:07:16.414 "name": "BaseBdev1",
00:07:16.414 "uuid": "6d930869-f75a-4900-be0a-13ec64119dab",
00:07:16.414 "is_configured": true,
00:07:16.414 "data_offset": 0,
00:07:16.414 "data_size": 65536
00:07:16.414 },
00:07:16.414 {
00:07:16.414 "name": "BaseBdev2",
00:07:16.414 "uuid": "87eb36c5-f7e1-435d-8a89-374d505ce55d",
00:07:16.414 "is_configured": true,
00:07:16.414 "data_offset": 0,
00:07:16.414 "data_size": 65536
00:07:16.414 }
00:07:16.414 ]
00:07:16.414 }
00:07:16.414 }
00:07:16.414 }'
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:16.414 BaseBdev2'
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:16.414 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:16.415 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.415 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.674 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.674 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:16.674 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:16.674 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:16.674 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.674 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.674 [2024-11-26 13:20:04.996195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.674 "name": "Existed_Raid", 00:07:16.674 "uuid": 
"7b861070-74cc-4612-8851-0cf8a7462662", 00:07:16.674 "strip_size_kb": 0, 00:07:16.674 "state": "online", 00:07:16.674 "raid_level": "raid1", 00:07:16.674 "superblock": false, 00:07:16.674 "num_base_bdevs": 2, 00:07:16.674 "num_base_bdevs_discovered": 1, 00:07:16.674 "num_base_bdevs_operational": 1, 00:07:16.674 "base_bdevs_list": [ 00:07:16.674 { 00:07:16.674 "name": null, 00:07:16.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.674 "is_configured": false, 00:07:16.674 "data_offset": 0, 00:07:16.674 "data_size": 65536 00:07:16.674 }, 00:07:16.674 { 00:07:16.674 "name": "BaseBdev2", 00:07:16.674 "uuid": "87eb36c5-f7e1-435d-8a89-374d505ce55d", 00:07:16.674 "is_configured": true, 00:07:16.674 "data_offset": 0, 00:07:16.674 "data_size": 65536 00:07:16.674 } 00:07:16.674 ] 00:07:16.674 }' 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.674 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.243 [2024-11-26 13:20:05.643652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.243 [2024-11-26 13:20:05.643764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.243 [2024-11-26 13:20:05.711309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.243 [2024-11-26 13:20:05.711363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.243 [2024-11-26 13:20:05.711381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:17.243 
13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62215 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62215 ']' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62215 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62215 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62215' 00:07:17.243 killing process with pid 62215 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62215 00:07:17.243 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62215 00:07:17.243 [2024-11-26 13:20:05.799030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.503 [2024-11-26 13:20:05.817338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.439 13:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:18.439 00:07:18.439 real 0m5.183s 00:07:18.440 user 0m7.929s 00:07:18.440 sys 0m0.769s 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:18.440 13:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.440 ************************************ 00:07:18.440 END TEST raid_state_function_test 00:07:18.440 ************************************ 00:07:18.440 13:20:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:18.440 13:20:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:18.440 13:20:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.440 13:20:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.440 ************************************ 00:07:18.440 START TEST raid_state_function_test_sb 00:07:18.440 ************************************ 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62468 00:07:18.440 Process raid pid: 62468 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62468' 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62468 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 62468 ']' 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.440 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.440 [2024-11-26 13:20:06.842526] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:07:18.440 [2024-11-26 13:20:06.842702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.699 [2024-11-26 13:20:07.023927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.699 [2024-11-26 13:20:07.123928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.956 [2024-11-26 13:20:07.293662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.956 [2024-11-26 13:20:07.293702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.214 [2024-11-26 13:20:07.688915] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.214 [2024-11-26 13:20:07.688968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.214 [2024-11-26 13:20:07.688982] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.214 [2024-11-26 13:20:07.688996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:19.214 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.215 "name": "Existed_Raid", 00:07:19.215 "uuid": "ec8eb111-5ca2-409f-9483-8035554d0718", 00:07:19.215 "strip_size_kb": 0, 00:07:19.215 "state": "configuring", 00:07:19.215 "raid_level": "raid1", 00:07:19.215 "superblock": true, 00:07:19.215 "num_base_bdevs": 2, 00:07:19.215 "num_base_bdevs_discovered": 0, 00:07:19.215 "num_base_bdevs_operational": 2, 00:07:19.215 "base_bdevs_list": [ 00:07:19.215 { 00:07:19.215 "name": "BaseBdev1", 00:07:19.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.215 "is_configured": false, 00:07:19.215 "data_offset": 0, 00:07:19.215 "data_size": 0 00:07:19.215 }, 00:07:19.215 { 00:07:19.215 "name": "BaseBdev2", 00:07:19.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.215 "is_configured": false, 00:07:19.215 "data_offset": 0, 00:07:19.215 "data_size": 0 00:07:19.215 } 00:07:19.215 ] 00:07:19.215 }' 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.215 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.782 [2024-11-26 13:20:08.212946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.782 [2024-11-26 13:20:08.212977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.782 [2024-11-26 13:20:08.220952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.782 [2024-11-26 13:20:08.220994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.782 [2024-11-26 13:20:08.221006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.782 [2024-11-26 13:20:08.221022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.782 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:19.783 [2024-11-26 13:20:08.259329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.783 BaseBdev1 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.783 [ 00:07:19.783 { 00:07:19.783 "name": "BaseBdev1", 00:07:19.783 "aliases": [ 00:07:19.783 "7a935c40-5cae-4761-8fee-cba361ef5d09" 00:07:19.783 ], 00:07:19.783 "product_name": "Malloc disk", 00:07:19.783 "block_size": 512, 
00:07:19.783 "num_blocks": 65536, 00:07:19.783 "uuid": "7a935c40-5cae-4761-8fee-cba361ef5d09", 00:07:19.783 "assigned_rate_limits": { 00:07:19.783 "rw_ios_per_sec": 0, 00:07:19.783 "rw_mbytes_per_sec": 0, 00:07:19.783 "r_mbytes_per_sec": 0, 00:07:19.783 "w_mbytes_per_sec": 0 00:07:19.783 }, 00:07:19.783 "claimed": true, 00:07:19.783 "claim_type": "exclusive_write", 00:07:19.783 "zoned": false, 00:07:19.783 "supported_io_types": { 00:07:19.783 "read": true, 00:07:19.783 "write": true, 00:07:19.783 "unmap": true, 00:07:19.783 "flush": true, 00:07:19.783 "reset": true, 00:07:19.783 "nvme_admin": false, 00:07:19.783 "nvme_io": false, 00:07:19.783 "nvme_io_md": false, 00:07:19.783 "write_zeroes": true, 00:07:19.783 "zcopy": true, 00:07:19.783 "get_zone_info": false, 00:07:19.783 "zone_management": false, 00:07:19.783 "zone_append": false, 00:07:19.783 "compare": false, 00:07:19.783 "compare_and_write": false, 00:07:19.783 "abort": true, 00:07:19.783 "seek_hole": false, 00:07:19.783 "seek_data": false, 00:07:19.783 "copy": true, 00:07:19.783 "nvme_iov_md": false 00:07:19.783 }, 00:07:19.783 "memory_domains": [ 00:07:19.783 { 00:07:19.783 "dma_device_id": "system", 00:07:19.783 "dma_device_type": 1 00:07:19.783 }, 00:07:19.783 { 00:07:19.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.783 "dma_device_type": 2 00:07:19.783 } 00:07:19.783 ], 00:07:19.783 "driver_specific": {} 00:07:19.783 } 00:07:19.783 ] 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.783 "name": "Existed_Raid", 00:07:19.783 "uuid": "3e2e4809-4396-449f-88c1-37890c61e7d4", 00:07:19.783 "strip_size_kb": 0, 00:07:19.783 "state": "configuring", 00:07:19.783 "raid_level": "raid1", 00:07:19.783 "superblock": true, 00:07:19.783 "num_base_bdevs": 2, 00:07:19.783 "num_base_bdevs_discovered": 1, 00:07:19.783 "num_base_bdevs_operational": 2, 00:07:19.783 "base_bdevs_list": [ 00:07:19.783 { 00:07:19.783 "name": "BaseBdev1", 
00:07:19.783 "uuid": "7a935c40-5cae-4761-8fee-cba361ef5d09", 00:07:19.783 "is_configured": true, 00:07:19.783 "data_offset": 2048, 00:07:19.783 "data_size": 63488 00:07:19.783 }, 00:07:19.783 { 00:07:19.783 "name": "BaseBdev2", 00:07:19.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.783 "is_configured": false, 00:07:19.783 "data_offset": 0, 00:07:19.783 "data_size": 0 00:07:19.783 } 00:07:19.783 ] 00:07:19.783 }' 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.783 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.365 [2024-11-26 13:20:08.811510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.365 [2024-11-26 13:20:08.811554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.365 [2024-11-26 13:20:08.819571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.365 [2024-11-26 13:20:08.821726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:07:20.365 [2024-11-26 13:20:08.821793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.365 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.366 "name": "Existed_Raid", 00:07:20.366 "uuid": "0d8780ff-5e22-4503-836a-e08c48986d5f", 00:07:20.366 "strip_size_kb": 0, 00:07:20.366 "state": "configuring", 00:07:20.366 "raid_level": "raid1", 00:07:20.366 "superblock": true, 00:07:20.366 "num_base_bdevs": 2, 00:07:20.366 "num_base_bdevs_discovered": 1, 00:07:20.366 "num_base_bdevs_operational": 2, 00:07:20.366 "base_bdevs_list": [ 00:07:20.366 { 00:07:20.366 "name": "BaseBdev1", 00:07:20.366 "uuid": "7a935c40-5cae-4761-8fee-cba361ef5d09", 00:07:20.366 "is_configured": true, 00:07:20.366 "data_offset": 2048, 00:07:20.366 "data_size": 63488 00:07:20.366 }, 00:07:20.366 { 00:07:20.366 "name": "BaseBdev2", 00:07:20.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.366 "is_configured": false, 00:07:20.366 "data_offset": 0, 00:07:20.366 "data_size": 0 00:07:20.366 } 00:07:20.366 ] 00:07:20.366 }' 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.366 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 [2024-11-26 13:20:09.372593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.932 [2024-11-26 13:20:09.372858] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.932 [2024-11-26 13:20:09.372874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:20.932 [2024-11-26 13:20:09.373216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:20.932 [2024-11-26 13:20:09.373427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.932 [2024-11-26 13:20:09.373454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:20.932 [2024-11-26 13:20:09.373623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.932 BaseBdev2 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 [ 00:07:20.932 { 00:07:20.932 "name": "BaseBdev2", 00:07:20.932 "aliases": [ 00:07:20.932 "e2631628-d36a-4d80-8a77-a687a36d2753" 00:07:20.932 ], 00:07:20.932 "product_name": "Malloc disk", 00:07:20.932 "block_size": 512, 00:07:20.932 "num_blocks": 65536, 00:07:20.932 "uuid": "e2631628-d36a-4d80-8a77-a687a36d2753", 00:07:20.932 "assigned_rate_limits": { 00:07:20.932 "rw_ios_per_sec": 0, 00:07:20.932 "rw_mbytes_per_sec": 0, 00:07:20.932 "r_mbytes_per_sec": 0, 00:07:20.932 "w_mbytes_per_sec": 0 00:07:20.932 }, 00:07:20.932 "claimed": true, 00:07:20.932 "claim_type": "exclusive_write", 00:07:20.932 "zoned": false, 00:07:20.932 "supported_io_types": { 00:07:20.932 "read": true, 00:07:20.932 "write": true, 00:07:20.932 "unmap": true, 00:07:20.932 "flush": true, 00:07:20.932 "reset": true, 00:07:20.932 "nvme_admin": false, 00:07:20.932 "nvme_io": false, 00:07:20.932 "nvme_io_md": false, 00:07:20.932 "write_zeroes": true, 00:07:20.932 "zcopy": true, 00:07:20.932 "get_zone_info": false, 00:07:20.932 "zone_management": false, 00:07:20.932 "zone_append": false, 00:07:20.932 "compare": false, 00:07:20.932 "compare_and_write": false, 00:07:20.932 "abort": true, 00:07:20.932 "seek_hole": false, 00:07:20.932 "seek_data": false, 00:07:20.932 "copy": true, 00:07:20.932 "nvme_iov_md": false 00:07:20.932 }, 00:07:20.932 "memory_domains": [ 00:07:20.932 { 00:07:20.932 "dma_device_id": "system", 00:07:20.932 "dma_device_type": 1 00:07:20.932 }, 00:07:20.932 { 00:07:20.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.932 "dma_device_type": 2 00:07:20.932 } 00:07:20.932 ], 00:07:20.932 "driver_specific": 
{} 00:07:20.932 } 00:07:20.932 ] 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.932 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.932 "name": "Existed_Raid", 00:07:20.932 "uuid": "0d8780ff-5e22-4503-836a-e08c48986d5f", 00:07:20.932 "strip_size_kb": 0, 00:07:20.932 "state": "online", 00:07:20.932 "raid_level": "raid1", 00:07:20.932 "superblock": true, 00:07:20.932 "num_base_bdevs": 2, 00:07:20.932 "num_base_bdevs_discovered": 2, 00:07:20.932 "num_base_bdevs_operational": 2, 00:07:20.932 "base_bdevs_list": [ 00:07:20.932 { 00:07:20.932 "name": "BaseBdev1", 00:07:20.933 "uuid": "7a935c40-5cae-4761-8fee-cba361ef5d09", 00:07:20.933 "is_configured": true, 00:07:20.933 "data_offset": 2048, 00:07:20.933 "data_size": 63488 00:07:20.933 }, 00:07:20.933 { 00:07:20.933 "name": "BaseBdev2", 00:07:20.933 "uuid": "e2631628-d36a-4d80-8a77-a687a36d2753", 00:07:20.933 "is_configured": true, 00:07:20.933 "data_offset": 2048, 00:07:20.933 "data_size": 63488 00:07:20.933 } 00:07:20.933 ] 00:07:20.933 }' 00:07:20.933 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.933 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.499 [2024-11-26 13:20:09.937109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.499 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.499 "name": "Existed_Raid", 00:07:21.499 "aliases": [ 00:07:21.500 "0d8780ff-5e22-4503-836a-e08c48986d5f" 00:07:21.500 ], 00:07:21.500 "product_name": "Raid Volume", 00:07:21.500 "block_size": 512, 00:07:21.500 "num_blocks": 63488, 00:07:21.500 "uuid": "0d8780ff-5e22-4503-836a-e08c48986d5f", 00:07:21.500 "assigned_rate_limits": { 00:07:21.500 "rw_ios_per_sec": 0, 00:07:21.500 "rw_mbytes_per_sec": 0, 00:07:21.500 "r_mbytes_per_sec": 0, 00:07:21.500 "w_mbytes_per_sec": 0 00:07:21.500 }, 00:07:21.500 "claimed": false, 00:07:21.500 "zoned": false, 00:07:21.500 "supported_io_types": { 00:07:21.500 "read": true, 00:07:21.500 "write": true, 00:07:21.500 "unmap": false, 00:07:21.500 "flush": false, 00:07:21.500 "reset": true, 00:07:21.500 "nvme_admin": false, 00:07:21.500 "nvme_io": false, 00:07:21.500 "nvme_io_md": false, 00:07:21.500 "write_zeroes": true, 00:07:21.500 "zcopy": false, 00:07:21.500 "get_zone_info": false, 00:07:21.500 "zone_management": false, 00:07:21.500 "zone_append": false, 00:07:21.500 "compare": false, 00:07:21.500 "compare_and_write": false, 
00:07:21.500 "abort": false, 00:07:21.500 "seek_hole": false, 00:07:21.500 "seek_data": false, 00:07:21.500 "copy": false, 00:07:21.500 "nvme_iov_md": false 00:07:21.500 }, 00:07:21.500 "memory_domains": [ 00:07:21.500 { 00:07:21.500 "dma_device_id": "system", 00:07:21.500 "dma_device_type": 1 00:07:21.500 }, 00:07:21.500 { 00:07:21.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.500 "dma_device_type": 2 00:07:21.500 }, 00:07:21.500 { 00:07:21.500 "dma_device_id": "system", 00:07:21.500 "dma_device_type": 1 00:07:21.500 }, 00:07:21.500 { 00:07:21.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.500 "dma_device_type": 2 00:07:21.500 } 00:07:21.500 ], 00:07:21.500 "driver_specific": { 00:07:21.500 "raid": { 00:07:21.500 "uuid": "0d8780ff-5e22-4503-836a-e08c48986d5f", 00:07:21.500 "strip_size_kb": 0, 00:07:21.500 "state": "online", 00:07:21.500 "raid_level": "raid1", 00:07:21.500 "superblock": true, 00:07:21.500 "num_base_bdevs": 2, 00:07:21.500 "num_base_bdevs_discovered": 2, 00:07:21.500 "num_base_bdevs_operational": 2, 00:07:21.500 "base_bdevs_list": [ 00:07:21.500 { 00:07:21.500 "name": "BaseBdev1", 00:07:21.500 "uuid": "7a935c40-5cae-4761-8fee-cba361ef5d09", 00:07:21.500 "is_configured": true, 00:07:21.500 "data_offset": 2048, 00:07:21.500 "data_size": 63488 00:07:21.500 }, 00:07:21.500 { 00:07:21.500 "name": "BaseBdev2", 00:07:21.500 "uuid": "e2631628-d36a-4d80-8a77-a687a36d2753", 00:07:21.500 "is_configured": true, 00:07:21.500 "data_offset": 2048, 00:07:21.500 "data_size": 63488 00:07:21.500 } 00:07:21.500 ] 00:07:21.500 } 00:07:21.500 } 00:07:21.500 }' 00:07:21.500 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.500 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.500 BaseBdev2' 00:07:21.500 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.758 [2024-11-26 13:20:10.204929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:21.758 13:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.758 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.016 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.016 "name": "Existed_Raid", 00:07:22.016 "uuid": "0d8780ff-5e22-4503-836a-e08c48986d5f", 00:07:22.016 "strip_size_kb": 0, 00:07:22.016 "state": "online", 00:07:22.016 "raid_level": "raid1", 00:07:22.016 "superblock": true, 00:07:22.016 "num_base_bdevs": 2, 00:07:22.016 "num_base_bdevs_discovered": 1, 00:07:22.016 "num_base_bdevs_operational": 1, 00:07:22.016 "base_bdevs_list": [ 00:07:22.016 { 00:07:22.016 "name": null, 00:07:22.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.016 "is_configured": false, 00:07:22.016 "data_offset": 0, 00:07:22.016 "data_size": 63488 00:07:22.016 }, 00:07:22.016 { 00:07:22.016 "name": "BaseBdev2", 00:07:22.016 "uuid": "e2631628-d36a-4d80-8a77-a687a36d2753", 00:07:22.016 "is_configured": true, 00:07:22.016 "data_offset": 2048, 00:07:22.016 "data_size": 63488 00:07:22.016 } 00:07:22.016 ] 00:07:22.016 }' 00:07:22.016 
13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.016 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.275 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:22.275 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.275 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:22.275 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.275 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.275 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.275 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.535 [2024-11-26 13:20:10.849071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:22.535 [2024-11-26 13:20:10.849182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.535 [2024-11-26 13:20:10.913736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.535 [2024-11-26 13:20:10.913790] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.535 [2024-11-26 13:20:10.913808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62468 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62468 ']' 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62468 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.535 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62468 00:07:22.535 killing process with pid 62468 00:07:22.535 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.535 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.535 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62468' 00:07:22.535 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62468 00:07:22.535 [2024-11-26 13:20:11.005866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.535 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62468 00:07:22.535 [2024-11-26 13:20:11.018373] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.473 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:23.473 00:07:23.473 real 0m5.125s 00:07:23.473 user 0m7.896s 00:07:23.473 sys 0m0.742s 00:07:23.473 ************************************ 00:07:23.473 END TEST raid_state_function_test_sb 00:07:23.473 ************************************ 00:07:23.473 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.473 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.473 13:20:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:23.473 13:20:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:23.473 13:20:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.473 13:20:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.473 
************************************ 00:07:23.473 START TEST raid_superblock_test 00:07:23.473 ************************************ 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62720 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 62720 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62720 ']' 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.473 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.473 [2024-11-26 13:20:12.018689] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:07:23.473 [2024-11-26 13:20:12.018943] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62720 ] 00:07:23.732 [2024-11-26 13:20:12.199844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.991 [2024-11-26 13:20:12.298979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.991 [2024-11-26 13:20:12.466339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.991 [2024-11-26 13:20:12.466402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:24.559 
13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.559 malloc1 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.559 [2024-11-26 13:20:12.934547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:24.559 [2024-11-26 13:20:12.934640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.559 [2024-11-26 13:20:12.934673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:24.559 [2024-11-26 13:20:12.934688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.559 [2024-11-26 13:20:12.937254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.559 [2024-11-26 13:20:12.937307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:24.559 pt1 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.559 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.560 malloc2 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.560 [2024-11-26 13:20:12.980420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.560 [2024-11-26 13:20:12.980509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.560 [2024-11-26 13:20:12.980537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:24.560 [2024-11-26 13:20:12.980551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.560 [2024-11-26 13:20:12.983022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.560 [2024-11-26 13:20:12.983079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.560 
pt2 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.560 [2024-11-26 13:20:12.992513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:24.560 [2024-11-26 13:20:12.994690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.560 [2024-11-26 13:20:12.994874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.560 [2024-11-26 13:20:12.994894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:24.560 [2024-11-26 13:20:12.995200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:24.560 [2024-11-26 13:20:12.995404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:24.560 [2024-11-26 13:20:12.995434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:24.560 [2024-11-26 13:20:12.995599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.560 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.560 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.560 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.560 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.560 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.560 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.560 "name": "raid_bdev1", 00:07:24.560 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:24.560 "strip_size_kb": 0, 00:07:24.560 "state": "online", 00:07:24.560 "raid_level": "raid1", 00:07:24.560 "superblock": true, 00:07:24.560 "num_base_bdevs": 2, 00:07:24.560 "num_base_bdevs_discovered": 2, 00:07:24.560 "num_base_bdevs_operational": 2, 00:07:24.560 "base_bdevs_list": [ 00:07:24.560 { 00:07:24.560 "name": "pt1", 00:07:24.560 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:24.560 "is_configured": true, 00:07:24.560 "data_offset": 2048, 00:07:24.560 "data_size": 63488 00:07:24.560 }, 00:07:24.560 { 00:07:24.560 "name": "pt2", 00:07:24.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.560 "is_configured": true, 00:07:24.560 "data_offset": 2048, 00:07:24.560 "data_size": 63488 00:07:24.560 } 00:07:24.560 ] 00:07:24.560 }' 00:07:24.560 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.560 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.129 [2024-11-26 13:20:13.516865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:25.129 "name": "raid_bdev1", 00:07:25.129 "aliases": [ 00:07:25.129 "f5de9c1f-a52a-4b4f-bdf5-24415989754b" 00:07:25.129 ], 00:07:25.129 "product_name": "Raid Volume", 00:07:25.129 "block_size": 512, 00:07:25.129 "num_blocks": 63488, 00:07:25.129 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:25.129 "assigned_rate_limits": { 00:07:25.129 "rw_ios_per_sec": 0, 00:07:25.129 "rw_mbytes_per_sec": 0, 00:07:25.129 "r_mbytes_per_sec": 0, 00:07:25.129 "w_mbytes_per_sec": 0 00:07:25.129 }, 00:07:25.129 "claimed": false, 00:07:25.129 "zoned": false, 00:07:25.129 "supported_io_types": { 00:07:25.129 "read": true, 00:07:25.129 "write": true, 00:07:25.129 "unmap": false, 00:07:25.129 "flush": false, 00:07:25.129 "reset": true, 00:07:25.129 "nvme_admin": false, 00:07:25.129 "nvme_io": false, 00:07:25.129 "nvme_io_md": false, 00:07:25.129 "write_zeroes": true, 00:07:25.129 "zcopy": false, 00:07:25.129 "get_zone_info": false, 00:07:25.129 "zone_management": false, 00:07:25.129 "zone_append": false, 00:07:25.129 "compare": false, 00:07:25.129 "compare_and_write": false, 00:07:25.129 "abort": false, 00:07:25.129 "seek_hole": false, 00:07:25.129 "seek_data": false, 00:07:25.129 "copy": false, 00:07:25.129 "nvme_iov_md": false 00:07:25.129 }, 00:07:25.129 "memory_domains": [ 00:07:25.129 { 00:07:25.129 "dma_device_id": "system", 00:07:25.129 "dma_device_type": 1 00:07:25.129 }, 00:07:25.129 { 00:07:25.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.129 "dma_device_type": 2 00:07:25.129 }, 00:07:25.129 { 00:07:25.129 "dma_device_id": "system", 00:07:25.129 "dma_device_type": 1 00:07:25.129 }, 00:07:25.129 { 00:07:25.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.129 "dma_device_type": 2 00:07:25.129 } 00:07:25.129 ], 00:07:25.129 "driver_specific": { 00:07:25.129 "raid": { 00:07:25.129 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:25.129 "strip_size_kb": 0, 00:07:25.129 "state": "online", 00:07:25.129 "raid_level": "raid1", 
00:07:25.129 "superblock": true, 00:07:25.129 "num_base_bdevs": 2, 00:07:25.129 "num_base_bdevs_discovered": 2, 00:07:25.129 "num_base_bdevs_operational": 2, 00:07:25.129 "base_bdevs_list": [ 00:07:25.129 { 00:07:25.129 "name": "pt1", 00:07:25.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.129 "is_configured": true, 00:07:25.129 "data_offset": 2048, 00:07:25.129 "data_size": 63488 00:07:25.129 }, 00:07:25.129 { 00:07:25.129 "name": "pt2", 00:07:25.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.129 "is_configured": true, 00:07:25.129 "data_offset": 2048, 00:07:25.129 "data_size": 63488 00:07:25.129 } 00:07:25.129 ] 00:07:25.129 } 00:07:25.129 } 00:07:25.129 }' 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:25.129 pt2' 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.129 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 [2024-11-26 13:20:13.784888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f5de9c1f-a52a-4b4f-bdf5-24415989754b 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f5de9c1f-a52a-4b4f-bdf5-24415989754b ']' 00:07:25.389 13:20:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 [2024-11-26 13:20:13.832623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.389 [2024-11-26 13:20:13.832646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.389 [2024-11-26 13:20:13.832714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.389 [2024-11-26 13:20:13.832767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.389 [2024-11-26 13:20:13.832785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.649 13:20:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.649 [2024-11-26 13:20:13.976703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:25.649 [2024-11-26 13:20:13.979064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:25.649 [2024-11-26 13:20:13.979353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:25.649 [2024-11-26 13:20:13.979425] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:25.649 [2024-11-26 13:20:13.979450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.649 [2024-11-26 13:20:13.979464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:25.649 request: 00:07:25.649 { 00:07:25.649 "name": "raid_bdev1", 00:07:25.649 "raid_level": "raid1", 00:07:25.649 "base_bdevs": [ 00:07:25.649 "malloc1", 00:07:25.649 "malloc2" 00:07:25.649 ], 00:07:25.649 "superblock": false, 00:07:25.649 "method": "bdev_raid_create", 00:07:25.649 "req_id": 1 00:07:25.649 } 00:07:25.649 Got 
JSON-RPC error response 00:07:25.649 response: 00:07:25.649 { 00:07:25.649 "code": -17, 00:07:25.649 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:25.649 } 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.649 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.649 [2024-11-26 13:20:14.040679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:25.649 [2024-11-26 13:20:14.040732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:25.649 [2024-11-26 13:20:14.040750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:25.649 [2024-11-26 13:20:14.040763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.649 [2024-11-26 13:20:14.043243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.649 [2024-11-26 13:20:14.043485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:25.649 [2024-11-26 13:20:14.043593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:25.649 [2024-11-26 13:20:14.043677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:25.649 pt1 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.649 
13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.649 "name": "raid_bdev1", 00:07:25.649 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:25.649 "strip_size_kb": 0, 00:07:25.649 "state": "configuring", 00:07:25.649 "raid_level": "raid1", 00:07:25.649 "superblock": true, 00:07:25.649 "num_base_bdevs": 2, 00:07:25.649 "num_base_bdevs_discovered": 1, 00:07:25.649 "num_base_bdevs_operational": 2, 00:07:25.649 "base_bdevs_list": [ 00:07:25.649 { 00:07:25.649 "name": "pt1", 00:07:25.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.649 "is_configured": true, 00:07:25.649 "data_offset": 2048, 00:07:25.649 "data_size": 63488 00:07:25.649 }, 00:07:25.649 { 00:07:25.649 "name": null, 00:07:25.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.649 "is_configured": false, 00:07:25.649 "data_offset": 2048, 00:07:25.649 "data_size": 63488 00:07:25.649 } 00:07:25.649 ] 00:07:25.649 }' 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.649 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.217 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:26.217 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:26.217 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:26.217 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.217 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.217 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.217 [2024-11-26 13:20:14.552815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.217 [2024-11-26 13:20:14.552871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.218 [2024-11-26 13:20:14.552893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:26.218 [2024-11-26 13:20:14.552906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.218 [2024-11-26 13:20:14.553306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.218 [2024-11-26 13:20:14.553348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.218 [2024-11-26 13:20:14.553428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:26.218 [2024-11-26 13:20:14.553456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.218 [2024-11-26 13:20:14.553601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.218 [2024-11-26 13:20:14.553620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:26.218 [2024-11-26 13:20:14.553909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:26.218 [2024-11-26 13:20:14.554134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.218 [2024-11-26 13:20:14.554149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:07:26.218 [2024-11-26 13:20:14.554332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.218 pt2 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.218 "name": "raid_bdev1", 00:07:26.218 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:26.218 "strip_size_kb": 0, 00:07:26.218 "state": "online", 00:07:26.218 "raid_level": "raid1", 00:07:26.218 "superblock": true, 00:07:26.218 "num_base_bdevs": 2, 00:07:26.218 "num_base_bdevs_discovered": 2, 00:07:26.218 "num_base_bdevs_operational": 2, 00:07:26.218 "base_bdevs_list": [ 00:07:26.218 { 00:07:26.218 "name": "pt1", 00:07:26.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.218 "is_configured": true, 00:07:26.218 "data_offset": 2048, 00:07:26.218 "data_size": 63488 00:07:26.218 }, 00:07:26.218 { 00:07:26.218 "name": "pt2", 00:07:26.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.218 "is_configured": true, 00:07:26.218 "data_offset": 2048, 00:07:26.218 "data_size": 63488 00:07:26.218 } 00:07:26.218 ] 00:07:26.218 }' 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.218 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.787 [2024-11-26 13:20:15.093150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.787 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.787 "name": "raid_bdev1", 00:07:26.787 "aliases": [ 00:07:26.787 "f5de9c1f-a52a-4b4f-bdf5-24415989754b" 00:07:26.787 ], 00:07:26.787 "product_name": "Raid Volume", 00:07:26.787 "block_size": 512, 00:07:26.787 "num_blocks": 63488, 00:07:26.787 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:26.787 "assigned_rate_limits": { 00:07:26.787 "rw_ios_per_sec": 0, 00:07:26.787 "rw_mbytes_per_sec": 0, 00:07:26.787 "r_mbytes_per_sec": 0, 00:07:26.787 "w_mbytes_per_sec": 0 00:07:26.787 }, 00:07:26.787 "claimed": false, 00:07:26.787 "zoned": false, 00:07:26.787 "supported_io_types": { 00:07:26.787 "read": true, 00:07:26.787 "write": true, 00:07:26.787 "unmap": false, 00:07:26.787 "flush": false, 00:07:26.787 "reset": true, 00:07:26.787 "nvme_admin": false, 00:07:26.787 "nvme_io": false, 00:07:26.787 "nvme_io_md": false, 00:07:26.787 "write_zeroes": true, 00:07:26.787 "zcopy": false, 00:07:26.787 "get_zone_info": false, 00:07:26.787 "zone_management": false, 00:07:26.787 "zone_append": false, 00:07:26.787 "compare": false, 00:07:26.787 "compare_and_write": false, 00:07:26.787 "abort": false, 00:07:26.787 "seek_hole": false, 00:07:26.787 "seek_data": false, 00:07:26.787 "copy": false, 00:07:26.787 "nvme_iov_md": false 00:07:26.787 }, 00:07:26.787 "memory_domains": [ 00:07:26.787 { 00:07:26.787 "dma_device_id": 
"system", 00:07:26.787 "dma_device_type": 1 00:07:26.787 }, 00:07:26.787 { 00:07:26.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.787 "dma_device_type": 2 00:07:26.787 }, 00:07:26.787 { 00:07:26.787 "dma_device_id": "system", 00:07:26.787 "dma_device_type": 1 00:07:26.787 }, 00:07:26.787 { 00:07:26.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.787 "dma_device_type": 2 00:07:26.787 } 00:07:26.787 ], 00:07:26.787 "driver_specific": { 00:07:26.787 "raid": { 00:07:26.787 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:26.787 "strip_size_kb": 0, 00:07:26.787 "state": "online", 00:07:26.787 "raid_level": "raid1", 00:07:26.787 "superblock": true, 00:07:26.787 "num_base_bdevs": 2, 00:07:26.787 "num_base_bdevs_discovered": 2, 00:07:26.787 "num_base_bdevs_operational": 2, 00:07:26.787 "base_bdevs_list": [ 00:07:26.787 { 00:07:26.787 "name": "pt1", 00:07:26.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.787 "is_configured": true, 00:07:26.787 "data_offset": 2048, 00:07:26.787 "data_size": 63488 00:07:26.787 }, 00:07:26.787 { 00:07:26.787 "name": "pt2", 00:07:26.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.788 "is_configured": true, 00:07:26.788 "data_offset": 2048, 00:07:26.788 "data_size": 63488 00:07:26.788 } 00:07:26.788 ] 00:07:26.788 } 00:07:26.788 } 00:07:26.788 }' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:26.788 pt2' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.788 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.047 [2024-11-26 13:20:15.357209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f5de9c1f-a52a-4b4f-bdf5-24415989754b '!=' f5de9c1f-a52a-4b4f-bdf5-24415989754b ']' 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.047 [2024-11-26 13:20:15.405036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.047 "name": "raid_bdev1", 00:07:27.047 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:27.047 "strip_size_kb": 0, 00:07:27.047 "state": "online", 00:07:27.047 "raid_level": "raid1", 00:07:27.047 "superblock": true, 00:07:27.047 "num_base_bdevs": 2, 00:07:27.047 "num_base_bdevs_discovered": 1, 00:07:27.047 "num_base_bdevs_operational": 1, 00:07:27.047 "base_bdevs_list": [ 00:07:27.047 { 00:07:27.047 "name": null, 00:07:27.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.047 "is_configured": false, 00:07:27.047 "data_offset": 0, 00:07:27.047 "data_size": 63488 00:07:27.047 }, 00:07:27.047 { 00:07:27.047 "name": "pt2", 00:07:27.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.047 "is_configured": true, 00:07:27.047 "data_offset": 2048, 00:07:27.047 "data_size": 63488 00:07:27.047 } 00:07:27.047 ] 00:07:27.047 }' 
00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.047 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.616 [2024-11-26 13:20:15.921135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.616 [2024-11-26 13:20:15.921159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.616 [2024-11-26 13:20:15.921208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.616 [2024-11-26 13:20:15.921261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.616 [2024-11-26 13:20:15.921279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.616 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.616 [2024-11-26 13:20:15.997151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:27.616 [2024-11-26 13:20:15.997408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.616 [2024-11-26 13:20:15.997441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:27.616 [2024-11-26 13:20:15.997457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.616 
[2024-11-26 13:20:15.999928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.616 [2024-11-26 13:20:15.999974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:27.616 [2024-11-26 13:20:16.000042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:27.616 [2024-11-26 13:20:16.000087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:27.616 [2024-11-26 13:20:16.000180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:27.616 [2024-11-26 13:20:16.000199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:27.616 [2024-11-26 13:20:16.000480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:27.616 [2024-11-26 13:20:16.000701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:27.616 [2024-11-26 13:20:16.000715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:27.616 [2024-11-26 13:20:16.000854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.616 pt2 00:07:27.616 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.616 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:27.616 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.616 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.617 "name": "raid_bdev1", 00:07:27.617 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:27.617 "strip_size_kb": 0, 00:07:27.617 "state": "online", 00:07:27.617 "raid_level": "raid1", 00:07:27.617 "superblock": true, 00:07:27.617 "num_base_bdevs": 2, 00:07:27.617 "num_base_bdevs_discovered": 1, 00:07:27.617 "num_base_bdevs_operational": 1, 00:07:27.617 "base_bdevs_list": [ 00:07:27.617 { 00:07:27.617 "name": null, 00:07:27.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.617 "is_configured": false, 00:07:27.617 "data_offset": 2048, 00:07:27.617 "data_size": 63488 00:07:27.617 }, 00:07:27.617 { 00:07:27.617 "name": "pt2", 00:07:27.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.617 "is_configured": true, 00:07:27.617 "data_offset": 2048, 00:07:27.617 "data_size": 63488 00:07:27.617 } 00:07:27.617 ] 00:07:27.617 }' 
00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.617 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.185 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 [2024-11-26 13:20:16.529215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.186 [2024-11-26 13:20:16.529441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.186 [2024-11-26 13:20:16.529507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.186 [2024-11-26 13:20:16.529556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.186 [2024-11-26 13:20:16.529569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 [2024-11-26 13:20:16.593272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:28.186 [2024-11-26 13:20:16.593346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.186 [2024-11-26 13:20:16.593368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:28.186 [2024-11-26 13:20:16.593380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.186 [2024-11-26 13:20:16.595876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.186 [2024-11-26 13:20:16.595915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:28.186 [2024-11-26 13:20:16.596003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:28.186 [2024-11-26 13:20:16.596045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:28.186 [2024-11-26 13:20:16.596176] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:28.186 [2024-11-26 13:20:16.596195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.186 [2024-11-26 13:20:16.596213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:28.186 [2024-11-26 13:20:16.596319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:28.186 [2024-11-26 13:20:16.596414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:28.186 [2024-11-26 13:20:16.596428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:28.186 [2024-11-26 13:20:16.596751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:28.186 [2024-11-26 13:20:16.596916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:28.186 [2024-11-26 13:20:16.596949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:28.186 [2024-11-26 13:20:16.597117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.186 pt1 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.186 "name": "raid_bdev1", 00:07:28.186 "uuid": "f5de9c1f-a52a-4b4f-bdf5-24415989754b", 00:07:28.186 "strip_size_kb": 0, 00:07:28.186 "state": "online", 00:07:28.186 "raid_level": "raid1", 00:07:28.186 "superblock": true, 00:07:28.186 "num_base_bdevs": 2, 00:07:28.186 "num_base_bdevs_discovered": 1, 00:07:28.186 "num_base_bdevs_operational": 1, 00:07:28.186 "base_bdevs_list": [ 00:07:28.186 { 00:07:28.186 "name": null, 00:07:28.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.186 "is_configured": false, 00:07:28.186 "data_offset": 2048, 00:07:28.186 "data_size": 63488 00:07:28.186 }, 00:07:28.186 { 00:07:28.186 "name": "pt2", 00:07:28.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.186 "is_configured": true, 00:07:28.186 "data_offset": 2048, 00:07:28.186 "data_size": 63488 00:07:28.186 } 00:07:28.186 ] 00:07:28.186 }' 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.186 13:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.754 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.755 [2024-11-26 13:20:17.141590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f5de9c1f-a52a-4b4f-bdf5-24415989754b '!=' f5de9c1f-a52a-4b4f-bdf5-24415989754b ']' 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62720 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62720 ']' 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62720 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62720 00:07:28.755 killing process with pid 
62720 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62720' 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62720 00:07:28.755 [2024-11-26 13:20:17.215160] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.755 [2024-11-26 13:20:17.215219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.755 [2024-11-26 13:20:17.215291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.755 13:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62720 00:07:28.755 [2024-11-26 13:20:17.215311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:29.014 [2024-11-26 13:20:17.354014] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.951 13:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:29.951 00:07:29.951 real 0m6.280s 00:07:29.951 user 0m10.084s 00:07:29.951 sys 0m0.931s 00:07:29.951 ************************************ 00:07:29.951 END TEST raid_superblock_test 00:07:29.951 ************************************ 00:07:29.951 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.951 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.951 13:20:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:29.951 13:20:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:29.951 13:20:18 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.951 13:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.951 ************************************ 00:07:29.951 START TEST raid_read_error_test 00:07:29.951 ************************************ 00:07:29.951 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:29.951 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:29.951 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:29.951 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:29.951 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:29.952 13:20:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.glwpbCgkBV 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63049 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63049 00:07:29.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63049 ']' 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.952 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.952 [2024-11-26 13:20:18.374583] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:07:29.952 [2024-11-26 13:20:18.374772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63049 ] 00:07:30.211 [2024-11-26 13:20:18.556267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.211 [2024-11-26 13:20:18.653195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.469 [2024-11-26 13:20:18.820258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.469 [2024-11-26 13:20:18.820302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.037 BaseBdev1_malloc 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.037 true 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.037 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.037 [2024-11-26 13:20:19.343045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:31.037 [2024-11-26 13:20:19.343192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.038 [2024-11-26 13:20:19.343230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:31.038 [2024-11-26 13:20:19.343263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.038 [2024-11-26 13:20:19.345569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.038 [2024-11-26 13:20:19.345628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:31.038 BaseBdev1 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:31.038 BaseBdev2_malloc 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.038 true 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.038 [2024-11-26 13:20:19.392416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:31.038 [2024-11-26 13:20:19.392475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.038 [2024-11-26 13:20:19.392496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:31.038 [2024-11-26 13:20:19.392510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.038 [2024-11-26 13:20:19.394797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.038 [2024-11-26 13:20:19.394842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:31.038 BaseBdev2 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:31.038 13:20:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.038 [2024-11-26 13:20:19.400477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.038 [2024-11-26 13:20:19.402608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.038 [2024-11-26 13:20:19.402827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.038 [2024-11-26 13:20:19.402849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:31.038 [2024-11-26 13:20:19.403094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:31.038 [2024-11-26 13:20:19.403318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.038 [2024-11-26 13:20:19.403334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.038 [2024-11-26 13:20:19.403492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.038 "name": "raid_bdev1", 00:07:31.038 "uuid": "7d048786-d6d9-4178-b964-9d37ecbbb354", 00:07:31.038 "strip_size_kb": 0, 00:07:31.038 "state": "online", 00:07:31.038 "raid_level": "raid1", 00:07:31.038 "superblock": true, 00:07:31.038 "num_base_bdevs": 2, 00:07:31.038 "num_base_bdevs_discovered": 2, 00:07:31.038 "num_base_bdevs_operational": 2, 00:07:31.038 "base_bdevs_list": [ 00:07:31.038 { 00:07:31.038 "name": "BaseBdev1", 00:07:31.038 "uuid": "a24e3870-fcd3-512f-99a1-c8d5e3df8a69", 00:07:31.038 "is_configured": true, 00:07:31.038 "data_offset": 2048, 00:07:31.038 "data_size": 63488 00:07:31.038 }, 00:07:31.038 { 00:07:31.038 "name": "BaseBdev2", 00:07:31.038 "uuid": "e5207c1a-fc2a-5176-977b-ad7ff09fb48f", 00:07:31.038 "is_configured": true, 00:07:31.038 "data_offset": 2048, 00:07:31.038 "data_size": 63488 00:07:31.038 } 00:07:31.038 ] 00:07:31.038 }' 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.038 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.606 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:31.606 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:31.606 [2024-11-26 13:20:19.973640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.542 13:20:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.542 "name": "raid_bdev1", 00:07:32.542 "uuid": "7d048786-d6d9-4178-b964-9d37ecbbb354", 00:07:32.542 "strip_size_kb": 0, 00:07:32.542 "state": "online", 00:07:32.542 "raid_level": "raid1", 00:07:32.542 "superblock": true, 00:07:32.542 "num_base_bdevs": 2, 00:07:32.542 "num_base_bdevs_discovered": 2, 00:07:32.542 "num_base_bdevs_operational": 2, 00:07:32.542 "base_bdevs_list": [ 00:07:32.542 { 00:07:32.542 "name": "BaseBdev1", 00:07:32.542 "uuid": "a24e3870-fcd3-512f-99a1-c8d5e3df8a69", 00:07:32.542 "is_configured": true, 00:07:32.542 "data_offset": 2048, 00:07:32.542 "data_size": 63488 00:07:32.542 }, 00:07:32.542 { 00:07:32.542 "name": "BaseBdev2", 00:07:32.542 "uuid": "e5207c1a-fc2a-5176-977b-ad7ff09fb48f", 00:07:32.542 "is_configured": true, 00:07:32.542 "data_offset": 2048, 00:07:32.542 "data_size": 63488 
00:07:32.542 } 00:07:32.542 ] 00:07:32.542 }' 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.542 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 [2024-11-26 13:20:21.425564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.111 [2024-11-26 13:20:21.425877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.111 [2024-11-26 13:20:21.428862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.111 [2024-11-26 13:20:21.429033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.111 [2024-11-26 13:20:21.429165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.111 [2024-11-26 13:20:21.429327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.111 { 00:07:33.111 "results": [ 00:07:33.111 { 00:07:33.111 "job": "raid_bdev1", 00:07:33.111 "core_mask": "0x1", 00:07:33.111 "workload": "randrw", 00:07:33.111 "percentage": 50, 00:07:33.111 "status": "finished", 00:07:33.111 "queue_depth": 1, 00:07:33.111 "io_size": 131072, 00:07:33.111 "runtime": 1.450198, 00:07:33.111 "iops": 16292.25802269759, 00:07:33.111 "mibps": 2036.5322528371987, 00:07:33.111 "io_failed": 0, 00:07:33.111 "io_timeout": 0, 00:07:33.111 "avg_latency_us": 57.85528390092998, 00:07:33.111 "min_latency_us": 35.60727272727273, 00:07:33.111 "max_latency_us": 1400.0872727272726 00:07:33.111 } 00:07:33.111 ], 
00:07:33.111 "core_count": 1 00:07:33.111 } 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63049 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63049 ']' 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63049 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63049 00:07:33.111 killing process with pid 63049 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63049' 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63049 00:07:33.111 [2024-11-26 13:20:21.465823] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.111 13:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63049 00:07:33.111 [2024-11-26 13:20:21.557196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.glwpbCgkBV 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:34.048 00:07:34.048 real 0m4.189s 00:07:34.048 user 0m5.257s 00:07:34.048 sys 0m0.537s 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.048 ************************************ 00:07:34.048 END TEST raid_read_error_test 00:07:34.048 ************************************ 00:07:34.048 13:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.048 13:20:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:34.048 13:20:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.048 13:20:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.048 13:20:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.048 ************************************ 00:07:34.048 START TEST raid_write_error_test 00:07:34.048 ************************************ 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3tYHdmscWN 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63189 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63189 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63189 ']' 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.048 13:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.049 [2024-11-26 13:20:22.582720] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:07:34.049 [2024-11-26 13:20:22.582845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63189 ] 00:07:34.307 [2024-11-26 13:20:22.744305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.307 [2024-11-26 13:20:22.843089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.566 [2024-11-26 13:20:23.008931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.566 [2024-11-26 13:20:23.008971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.134 BaseBdev1_malloc 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.134 true 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.134 [2024-11-26 13:20:23.617702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.134 [2024-11-26 13:20:23.617766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.134 [2024-11-26 13:20:23.617790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.134 [2024-11-26 13:20:23.617804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.134 [2024-11-26 13:20:23.620072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.134 [2024-11-26 13:20:23.620114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.134 BaseBdev1 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.134 BaseBdev2_malloc 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.134 13:20:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.134 true 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.134 [2024-11-26 13:20:23.667347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.134 [2024-11-26 13:20:23.667405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.134 [2024-11-26 13:20:23.667426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.134 [2024-11-26 13:20:23.667440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.134 [2024-11-26 13:20:23.669668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.134 [2024-11-26 13:20:23.669708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.134 BaseBdev2 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.134 [2024-11-26 13:20:23.675409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:35.134 [2024-11-26 13:20:23.677390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.134 [2024-11-26 13:20:23.677609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.134 [2024-11-26 13:20:23.677629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:35.134 [2024-11-26 13:20:23.677868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:35.134 [2024-11-26 13:20:23.678079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.134 [2024-11-26 13:20:23.678094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:35.134 [2024-11-26 13:20:23.678265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.134 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.393 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.393 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.393 "name": "raid_bdev1", 00:07:35.393 "uuid": "ed226b92-07e8-4d7b-9dd5-e41242144dfb", 00:07:35.393 "strip_size_kb": 0, 00:07:35.393 "state": "online", 00:07:35.393 "raid_level": "raid1", 00:07:35.393 "superblock": true, 00:07:35.393 "num_base_bdevs": 2, 00:07:35.393 "num_base_bdevs_discovered": 2, 00:07:35.393 "num_base_bdevs_operational": 2, 00:07:35.393 "base_bdevs_list": [ 00:07:35.393 { 00:07:35.393 "name": "BaseBdev1", 00:07:35.393 "uuid": "bfddc3c9-e534-52e2-b906-28252c762eb7", 00:07:35.393 "is_configured": true, 00:07:35.393 "data_offset": 2048, 00:07:35.393 "data_size": 63488 00:07:35.393 }, 00:07:35.393 { 00:07:35.393 "name": "BaseBdev2", 00:07:35.393 "uuid": "8caabad0-f563-513f-96c2-9b20e1f6d6e1", 00:07:35.393 "is_configured": true, 00:07:35.393 "data_offset": 2048, 00:07:35.393 "data_size": 63488 00:07:35.393 } 00:07:35.393 ] 00:07:35.393 }' 00:07:35.393 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.393 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.652 13:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:35.652 13:20:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:35.910 [2024-11-26 13:20:24.308574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.846 [2024-11-26 13:20:25.194622] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:36.846 [2024-11-26 13:20:25.194705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:36.846 [2024-11-26 13:20:25.194946] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:36.846 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.847 "name": "raid_bdev1", 00:07:36.847 "uuid": "ed226b92-07e8-4d7b-9dd5-e41242144dfb", 00:07:36.847 "strip_size_kb": 0, 00:07:36.847 "state": "online", 00:07:36.847 "raid_level": "raid1", 00:07:36.847 "superblock": true, 00:07:36.847 "num_base_bdevs": 2, 00:07:36.847 "num_base_bdevs_discovered": 1, 00:07:36.847 "num_base_bdevs_operational": 1, 00:07:36.847 "base_bdevs_list": [ 00:07:36.847 { 00:07:36.847 "name": null, 00:07:36.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.847 "is_configured": false, 00:07:36.847 "data_offset": 0, 00:07:36.847 "data_size": 63488 00:07:36.847 }, 00:07:36.847 { 00:07:36.847 "name": 
"BaseBdev2", 00:07:36.847 "uuid": "8caabad0-f563-513f-96c2-9b20e1f6d6e1", 00:07:36.847 "is_configured": true, 00:07:36.847 "data_offset": 2048, 00:07:36.847 "data_size": 63488 00:07:36.847 } 00:07:36.847 ] 00:07:36.847 }' 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.847 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.414 [2024-11-26 13:20:25.713993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.414 [2024-11-26 13:20:25.714024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.414 [2024-11-26 13:20:25.716534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.414 [2024-11-26 13:20:25.716578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.414 [2024-11-26 13:20:25.716640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.414 [2024-11-26 13:20:25.716658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.414 { 00:07:37.414 "results": [ 00:07:37.414 { 00:07:37.414 "job": "raid_bdev1", 00:07:37.414 "core_mask": "0x1", 00:07:37.414 "workload": "randrw", 00:07:37.414 "percentage": 50, 00:07:37.414 "status": "finished", 00:07:37.414 "queue_depth": 1, 00:07:37.414 "io_size": 131072, 00:07:37.414 "runtime": 1.403492, 00:07:37.414 "iops": 20193.91631729999, 00:07:37.414 "mibps": 2524.239539662499, 00:07:37.414 "io_failed": 0, 00:07:37.414 "io_timeout": 0, 
00:07:37.414 "avg_latency_us": 46.29081517311282, 00:07:37.414 "min_latency_us": 32.11636363636364, 00:07:37.414 "max_latency_us": 1385.1927272727273 00:07:37.414 } 00:07:37.414 ], 00:07:37.414 "core_count": 1 00:07:37.414 } 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63189 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63189 ']' 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63189 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63189 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.414 killing process with pid 63189 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63189' 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63189 00:07:37.414 [2024-11-26 13:20:25.752593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.414 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63189 00:07:37.414 [2024-11-26 13:20:25.840579] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.3tYHdmscWN 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:38.351 00:07:38.351 real 0m4.224s 00:07:38.351 user 0m5.384s 00:07:38.351 sys 0m0.522s 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.351 ************************************ 00:07:38.351 END TEST raid_write_error_test 00:07:38.351 ************************************ 00:07:38.351 13:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.351 13:20:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:38.351 13:20:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:38.351 13:20:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:38.351 13:20:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.351 13:20:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.351 13:20:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.351 ************************************ 00:07:38.351 START TEST raid_state_function_test 00:07:38.351 ************************************ 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:38.351 13:20:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63327 00:07:38.351 Process raid pid: 63327 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63327' 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63327 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63327 ']' 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.351 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.351 [2024-11-26 13:20:26.884568] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:07:38.351 [2024-11-26 13:20:26.884759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.610 [2024-11-26 13:20:27.067733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.610 [2024-11-26 13:20:27.170853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.869 [2024-11-26 13:20:27.340666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.869 [2024-11-26 13:20:27.340711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.435 [2024-11-26 13:20:27.743355] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.435 [2024-11-26 13:20:27.743445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.435 [2024-11-26 13:20:27.743460] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.435 [2024-11-26 13:20:27.743475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.435 [2024-11-26 13:20:27.743484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:39.435 [2024-11-26 13:20:27.743497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.435 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.435 "name": "Existed_Raid", 00:07:39.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.435 "strip_size_kb": 64, 00:07:39.435 "state": "configuring", 00:07:39.435 "raid_level": "raid0", 00:07:39.435 "superblock": false, 00:07:39.435 "num_base_bdevs": 3, 00:07:39.435 "num_base_bdevs_discovered": 0, 00:07:39.435 "num_base_bdevs_operational": 3, 00:07:39.435 "base_bdevs_list": [ 00:07:39.435 { 00:07:39.435 "name": "BaseBdev1", 00:07:39.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.435 "is_configured": false, 00:07:39.436 "data_offset": 0, 00:07:39.436 "data_size": 0 00:07:39.436 }, 00:07:39.436 { 00:07:39.436 "name": "BaseBdev2", 00:07:39.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.436 "is_configured": false, 00:07:39.436 "data_offset": 0, 00:07:39.436 "data_size": 0 00:07:39.436 }, 00:07:39.436 { 00:07:39.436 "name": "BaseBdev3", 00:07:39.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.436 "is_configured": false, 00:07:39.436 "data_offset": 0, 00:07:39.436 "data_size": 0 00:07:39.436 } 00:07:39.436 ] 00:07:39.436 }' 00:07:39.436 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.436 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.694 13:20:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.694 [2024-11-26 13:20:28.235424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.694 [2024-11-26 13:20:28.235476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.694 [2024-11-26 13:20:28.247432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.694 [2024-11-26 13:20:28.247496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.694 [2024-11-26 13:20:28.247509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.694 [2024-11-26 13:20:28.247523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.694 [2024-11-26 13:20:28.247531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:39.694 [2024-11-26 13:20:28.247544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:39.694 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.953 [2024-11-26 13:20:28.285907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.953 BaseBdev1 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.953 [ 00:07:39.953 { 00:07:39.953 "name": "BaseBdev1", 00:07:39.953 "aliases": [ 00:07:39.953 "e0731f3d-7410-4159-9bd9-07d5d6122057" 00:07:39.953 ], 00:07:39.953 
"product_name": "Malloc disk", 00:07:39.953 "block_size": 512, 00:07:39.953 "num_blocks": 65536, 00:07:39.953 "uuid": "e0731f3d-7410-4159-9bd9-07d5d6122057", 00:07:39.953 "assigned_rate_limits": { 00:07:39.953 "rw_ios_per_sec": 0, 00:07:39.953 "rw_mbytes_per_sec": 0, 00:07:39.953 "r_mbytes_per_sec": 0, 00:07:39.953 "w_mbytes_per_sec": 0 00:07:39.953 }, 00:07:39.953 "claimed": true, 00:07:39.953 "claim_type": "exclusive_write", 00:07:39.953 "zoned": false, 00:07:39.953 "supported_io_types": { 00:07:39.953 "read": true, 00:07:39.953 "write": true, 00:07:39.953 "unmap": true, 00:07:39.953 "flush": true, 00:07:39.953 "reset": true, 00:07:39.953 "nvme_admin": false, 00:07:39.953 "nvme_io": false, 00:07:39.953 "nvme_io_md": false, 00:07:39.953 "write_zeroes": true, 00:07:39.953 "zcopy": true, 00:07:39.953 "get_zone_info": false, 00:07:39.953 "zone_management": false, 00:07:39.953 "zone_append": false, 00:07:39.953 "compare": false, 00:07:39.953 "compare_and_write": false, 00:07:39.953 "abort": true, 00:07:39.953 "seek_hole": false, 00:07:39.953 "seek_data": false, 00:07:39.953 "copy": true, 00:07:39.953 "nvme_iov_md": false 00:07:39.953 }, 00:07:39.953 "memory_domains": [ 00:07:39.953 { 00:07:39.953 "dma_device_id": "system", 00:07:39.953 "dma_device_type": 1 00:07:39.953 }, 00:07:39.953 { 00:07:39.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.953 "dma_device_type": 2 00:07:39.953 } 00:07:39.953 ], 00:07:39.953 "driver_specific": {} 00:07:39.953 } 00:07:39.953 ] 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.953 13:20:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.953 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.954 "name": "Existed_Raid", 00:07:39.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.954 "strip_size_kb": 64, 00:07:39.954 "state": "configuring", 00:07:39.954 "raid_level": "raid0", 00:07:39.954 "superblock": false, 00:07:39.954 "num_base_bdevs": 3, 00:07:39.954 "num_base_bdevs_discovered": 1, 00:07:39.954 "num_base_bdevs_operational": 3, 00:07:39.954 "base_bdevs_list": [ 00:07:39.954 { 00:07:39.954 "name": "BaseBdev1", 
00:07:39.954 "uuid": "e0731f3d-7410-4159-9bd9-07d5d6122057", 00:07:39.954 "is_configured": true, 00:07:39.954 "data_offset": 0, 00:07:39.954 "data_size": 65536 00:07:39.954 }, 00:07:39.954 { 00:07:39.954 "name": "BaseBdev2", 00:07:39.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.954 "is_configured": false, 00:07:39.954 "data_offset": 0, 00:07:39.954 "data_size": 0 00:07:39.954 }, 00:07:39.954 { 00:07:39.954 "name": "BaseBdev3", 00:07:39.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.954 "is_configured": false, 00:07:39.954 "data_offset": 0, 00:07:39.954 "data_size": 0 00:07:39.954 } 00:07:39.954 ] 00:07:39.954 }' 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.954 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.521 [2024-11-26 13:20:28.830048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:40.521 [2024-11-26 13:20:28.830112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.521 [2024-11-26 
13:20:28.838111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.521 [2024-11-26 13:20:28.840168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.521 [2024-11-26 13:20:28.840213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.521 [2024-11-26 13:20:28.840226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:40.521 [2024-11-26 13:20:28.840261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.521 "name": "Existed_Raid", 00:07:40.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.521 "strip_size_kb": 64, 00:07:40.521 "state": "configuring", 00:07:40.521 "raid_level": "raid0", 00:07:40.521 "superblock": false, 00:07:40.521 "num_base_bdevs": 3, 00:07:40.521 "num_base_bdevs_discovered": 1, 00:07:40.521 "num_base_bdevs_operational": 3, 00:07:40.521 "base_bdevs_list": [ 00:07:40.521 { 00:07:40.521 "name": "BaseBdev1", 00:07:40.521 "uuid": "e0731f3d-7410-4159-9bd9-07d5d6122057", 00:07:40.521 "is_configured": true, 00:07:40.521 "data_offset": 0, 00:07:40.521 "data_size": 65536 00:07:40.521 }, 00:07:40.521 { 00:07:40.521 "name": "BaseBdev2", 00:07:40.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.521 "is_configured": false, 00:07:40.521 "data_offset": 0, 00:07:40.521 "data_size": 0 00:07:40.521 }, 00:07:40.521 { 00:07:40.521 "name": "BaseBdev3", 00:07:40.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.521 "is_configured": false, 00:07:40.521 "data_offset": 0, 00:07:40.521 "data_size": 0 00:07:40.521 } 00:07:40.521 ] 00:07:40.521 }' 00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:40.521 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.097 [2024-11-26 13:20:29.387217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.097 BaseBdev2 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.097 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:41.098 13:20:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.098 [ 00:07:41.098 { 00:07:41.098 "name": "BaseBdev2", 00:07:41.098 "aliases": [ 00:07:41.098 "4a517c71-726a-4d00-97ee-30c590a87b01" 00:07:41.098 ], 00:07:41.098 "product_name": "Malloc disk", 00:07:41.098 "block_size": 512, 00:07:41.098 "num_blocks": 65536, 00:07:41.098 "uuid": "4a517c71-726a-4d00-97ee-30c590a87b01", 00:07:41.098 "assigned_rate_limits": { 00:07:41.098 "rw_ios_per_sec": 0, 00:07:41.098 "rw_mbytes_per_sec": 0, 00:07:41.098 "r_mbytes_per_sec": 0, 00:07:41.098 "w_mbytes_per_sec": 0 00:07:41.098 }, 00:07:41.098 "claimed": true, 00:07:41.098 "claim_type": "exclusive_write", 00:07:41.098 "zoned": false, 00:07:41.098 "supported_io_types": { 00:07:41.098 "read": true, 00:07:41.098 "write": true, 00:07:41.098 "unmap": true, 00:07:41.098 "flush": true, 00:07:41.098 "reset": true, 00:07:41.098 "nvme_admin": false, 00:07:41.098 "nvme_io": false, 00:07:41.098 "nvme_io_md": false, 00:07:41.098 "write_zeroes": true, 00:07:41.098 "zcopy": true, 00:07:41.098 "get_zone_info": false, 00:07:41.098 "zone_management": false, 00:07:41.098 "zone_append": false, 00:07:41.098 "compare": false, 00:07:41.098 "compare_and_write": false, 00:07:41.098 "abort": true, 00:07:41.098 "seek_hole": false, 00:07:41.098 "seek_data": false, 00:07:41.098 "copy": true, 00:07:41.098 "nvme_iov_md": false 00:07:41.098 }, 00:07:41.098 "memory_domains": [ 00:07:41.098 { 00:07:41.098 "dma_device_id": "system", 00:07:41.098 "dma_device_type": 1 00:07:41.098 }, 00:07:41.098 { 00:07:41.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.098 "dma_device_type": 2 00:07:41.098 } 00:07:41.098 ], 00:07:41.098 "driver_specific": {} 00:07:41.098 } 00:07:41.098 ] 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.098 13:20:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.098 "name": "Existed_Raid", 00:07:41.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.098 "strip_size_kb": 64, 00:07:41.098 "state": "configuring", 00:07:41.098 "raid_level": "raid0", 00:07:41.098 "superblock": false, 00:07:41.098 "num_base_bdevs": 3, 00:07:41.098 "num_base_bdevs_discovered": 2, 00:07:41.098 "num_base_bdevs_operational": 3, 00:07:41.098 "base_bdevs_list": [ 00:07:41.098 { 00:07:41.098 "name": "BaseBdev1", 00:07:41.098 "uuid": "e0731f3d-7410-4159-9bd9-07d5d6122057", 00:07:41.098 "is_configured": true, 00:07:41.098 "data_offset": 0, 00:07:41.098 "data_size": 65536 00:07:41.098 }, 00:07:41.098 { 00:07:41.098 "name": "BaseBdev2", 00:07:41.098 "uuid": "4a517c71-726a-4d00-97ee-30c590a87b01", 00:07:41.098 "is_configured": true, 00:07:41.098 "data_offset": 0, 00:07:41.098 "data_size": 65536 00:07:41.098 }, 00:07:41.098 { 00:07:41.098 "name": "BaseBdev3", 00:07:41.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.098 "is_configured": false, 00:07:41.098 "data_offset": 0, 00:07:41.098 "data_size": 0 00:07:41.098 } 00:07:41.098 ] 00:07:41.098 }' 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.098 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.357 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.616 [2024-11-26 13:20:29.964431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:41.616 [2024-11-26 13:20:29.964491] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:41.616 [2024-11-26 13:20:29.964509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:41.616 [2024-11-26 13:20:29.964875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:41.616 [2024-11-26 13:20:29.965074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:41.616 [2024-11-26 13:20:29.965095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:41.616 [2024-11-26 13:20:29.965384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.616 BaseBdev3 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.616 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.617 
13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 [ 00:07:41.617 { 00:07:41.617 "name": "BaseBdev3", 00:07:41.617 "aliases": [ 00:07:41.617 "b42879b8-1642-4b49-b885-2fb2979ddc3e" 00:07:41.617 ], 00:07:41.617 "product_name": "Malloc disk", 00:07:41.617 "block_size": 512, 00:07:41.617 "num_blocks": 65536, 00:07:41.617 "uuid": "b42879b8-1642-4b49-b885-2fb2979ddc3e", 00:07:41.617 "assigned_rate_limits": { 00:07:41.617 "rw_ios_per_sec": 0, 00:07:41.617 "rw_mbytes_per_sec": 0, 00:07:41.617 "r_mbytes_per_sec": 0, 00:07:41.617 "w_mbytes_per_sec": 0 00:07:41.617 }, 00:07:41.617 "claimed": true, 00:07:41.617 "claim_type": "exclusive_write", 00:07:41.617 "zoned": false, 00:07:41.617 "supported_io_types": { 00:07:41.617 "read": true, 00:07:41.617 "write": true, 00:07:41.617 "unmap": true, 00:07:41.617 "flush": true, 00:07:41.617 "reset": true, 00:07:41.617 "nvme_admin": false, 00:07:41.617 "nvme_io": false, 00:07:41.617 "nvme_io_md": false, 00:07:41.617 "write_zeroes": true, 00:07:41.617 "zcopy": true, 00:07:41.617 "get_zone_info": false, 00:07:41.617 "zone_management": false, 00:07:41.617 "zone_append": false, 00:07:41.617 "compare": false, 00:07:41.617 "compare_and_write": false, 00:07:41.617 "abort": true, 00:07:41.617 "seek_hole": false, 00:07:41.617 "seek_data": false, 00:07:41.617 "copy": true, 00:07:41.617 "nvme_iov_md": false 00:07:41.617 }, 00:07:41.617 "memory_domains": [ 00:07:41.617 { 00:07:41.617 "dma_device_id": "system", 00:07:41.617 "dma_device_type": 1 00:07:41.617 }, 00:07:41.617 { 00:07:41.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.617 "dma_device_type": 2 00:07:41.617 } 00:07:41.617 ], 00:07:41.617 "driver_specific": {} 00:07:41.617 } 00:07:41.617 ] 
00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.617 13:20:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.617 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.617 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.617 "name": "Existed_Raid", 00:07:41.617 "uuid": "12a16b3f-7d99-4edc-b2a2-a53b2dabb452", 00:07:41.617 "strip_size_kb": 64, 00:07:41.617 "state": "online", 00:07:41.617 "raid_level": "raid0", 00:07:41.617 "superblock": false, 00:07:41.617 "num_base_bdevs": 3, 00:07:41.617 "num_base_bdevs_discovered": 3, 00:07:41.617 "num_base_bdevs_operational": 3, 00:07:41.617 "base_bdevs_list": [ 00:07:41.617 { 00:07:41.617 "name": "BaseBdev1", 00:07:41.617 "uuid": "e0731f3d-7410-4159-9bd9-07d5d6122057", 00:07:41.617 "is_configured": true, 00:07:41.617 "data_offset": 0, 00:07:41.617 "data_size": 65536 00:07:41.617 }, 00:07:41.617 { 00:07:41.617 "name": "BaseBdev2", 00:07:41.617 "uuid": "4a517c71-726a-4d00-97ee-30c590a87b01", 00:07:41.617 "is_configured": true, 00:07:41.617 "data_offset": 0, 00:07:41.617 "data_size": 65536 00:07:41.617 }, 00:07:41.617 { 00:07:41.617 "name": "BaseBdev3", 00:07:41.617 "uuid": "b42879b8-1642-4b49-b885-2fb2979ddc3e", 00:07:41.617 "is_configured": true, 00:07:41.617 "data_offset": 0, 00:07:41.617 "data_size": 65536 00:07:41.617 } 00:07:41.617 ] 00:07:41.617 }' 00:07:41.617 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.617 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.185 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.185 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.185 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.185 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:42.185 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.185 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.186 [2024-11-26 13:20:30.520878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.186 "name": "Existed_Raid", 00:07:42.186 "aliases": [ 00:07:42.186 "12a16b3f-7d99-4edc-b2a2-a53b2dabb452" 00:07:42.186 ], 00:07:42.186 "product_name": "Raid Volume", 00:07:42.186 "block_size": 512, 00:07:42.186 "num_blocks": 196608, 00:07:42.186 "uuid": "12a16b3f-7d99-4edc-b2a2-a53b2dabb452", 00:07:42.186 "assigned_rate_limits": { 00:07:42.186 "rw_ios_per_sec": 0, 00:07:42.186 "rw_mbytes_per_sec": 0, 00:07:42.186 "r_mbytes_per_sec": 0, 00:07:42.186 "w_mbytes_per_sec": 0 00:07:42.186 }, 00:07:42.186 "claimed": false, 00:07:42.186 "zoned": false, 00:07:42.186 "supported_io_types": { 00:07:42.186 "read": true, 00:07:42.186 "write": true, 00:07:42.186 "unmap": true, 00:07:42.186 "flush": true, 00:07:42.186 "reset": true, 00:07:42.186 "nvme_admin": false, 00:07:42.186 "nvme_io": false, 00:07:42.186 "nvme_io_md": false, 00:07:42.186 "write_zeroes": true, 00:07:42.186 "zcopy": false, 00:07:42.186 "get_zone_info": false, 00:07:42.186 "zone_management": false, 00:07:42.186 
"zone_append": false, 00:07:42.186 "compare": false, 00:07:42.186 "compare_and_write": false, 00:07:42.186 "abort": false, 00:07:42.186 "seek_hole": false, 00:07:42.186 "seek_data": false, 00:07:42.186 "copy": false, 00:07:42.186 "nvme_iov_md": false 00:07:42.186 }, 00:07:42.186 "memory_domains": [ 00:07:42.186 { 00:07:42.186 "dma_device_id": "system", 00:07:42.186 "dma_device_type": 1 00:07:42.186 }, 00:07:42.186 { 00:07:42.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.186 "dma_device_type": 2 00:07:42.186 }, 00:07:42.186 { 00:07:42.186 "dma_device_id": "system", 00:07:42.186 "dma_device_type": 1 00:07:42.186 }, 00:07:42.186 { 00:07:42.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.186 "dma_device_type": 2 00:07:42.186 }, 00:07:42.186 { 00:07:42.186 "dma_device_id": "system", 00:07:42.186 "dma_device_type": 1 00:07:42.186 }, 00:07:42.186 { 00:07:42.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.186 "dma_device_type": 2 00:07:42.186 } 00:07:42.186 ], 00:07:42.186 "driver_specific": { 00:07:42.186 "raid": { 00:07:42.186 "uuid": "12a16b3f-7d99-4edc-b2a2-a53b2dabb452", 00:07:42.186 "strip_size_kb": 64, 00:07:42.186 "state": "online", 00:07:42.186 "raid_level": "raid0", 00:07:42.186 "superblock": false, 00:07:42.186 "num_base_bdevs": 3, 00:07:42.186 "num_base_bdevs_discovered": 3, 00:07:42.186 "num_base_bdevs_operational": 3, 00:07:42.186 "base_bdevs_list": [ 00:07:42.186 { 00:07:42.186 "name": "BaseBdev1", 00:07:42.186 "uuid": "e0731f3d-7410-4159-9bd9-07d5d6122057", 00:07:42.186 "is_configured": true, 00:07:42.186 "data_offset": 0, 00:07:42.186 "data_size": 65536 00:07:42.186 }, 00:07:42.186 { 00:07:42.186 "name": "BaseBdev2", 00:07:42.186 "uuid": "4a517c71-726a-4d00-97ee-30c590a87b01", 00:07:42.186 "is_configured": true, 00:07:42.186 "data_offset": 0, 00:07:42.186 "data_size": 65536 00:07:42.186 }, 00:07:42.186 { 00:07:42.186 "name": "BaseBdev3", 00:07:42.186 "uuid": "b42879b8-1642-4b49-b885-2fb2979ddc3e", 00:07:42.186 "is_configured": true, 
00:07:42.186 "data_offset": 0, 00:07:42.186 "data_size": 65536 00:07:42.186 } 00:07:42.186 ] 00:07:42.186 } 00:07:42.186 } 00:07:42.186 }' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:42.186 BaseBdev2 00:07:42.186 BaseBdev3' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.186 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.446 [2024-11-26 13:20:30.832725] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.446 [2024-11-26 13:20:30.832755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.446 [2024-11-26 13:20:30.832822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.446 "name": "Existed_Raid", 00:07:42.446 "uuid": "12a16b3f-7d99-4edc-b2a2-a53b2dabb452", 00:07:42.446 "strip_size_kb": 64, 00:07:42.446 "state": "offline", 00:07:42.446 "raid_level": "raid0", 00:07:42.446 "superblock": false, 00:07:42.446 "num_base_bdevs": 3, 00:07:42.446 "num_base_bdevs_discovered": 2, 00:07:42.446 "num_base_bdevs_operational": 2, 00:07:42.446 "base_bdevs_list": [ 00:07:42.446 { 00:07:42.446 "name": null, 00:07:42.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.446 "is_configured": false, 00:07:42.446 "data_offset": 0, 00:07:42.446 "data_size": 65536 00:07:42.446 }, 00:07:42.446 { 00:07:42.446 "name": "BaseBdev2", 00:07:42.446 "uuid": "4a517c71-726a-4d00-97ee-30c590a87b01", 00:07:42.446 "is_configured": true, 00:07:42.446 "data_offset": 0, 00:07:42.446 "data_size": 65536 00:07:42.446 }, 00:07:42.446 { 00:07:42.446 "name": "BaseBdev3", 00:07:42.446 "uuid": "b42879b8-1642-4b49-b885-2fb2979ddc3e", 00:07:42.446 "is_configured": true, 00:07:42.446 "data_offset": 0, 00:07:42.446 "data_size": 65536 00:07:42.446 } 00:07:42.446 ] 00:07:42.446 }' 00:07:42.446 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.446 13:20:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.015 [2024-11-26 13:20:31.465178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.015 13:20:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.015 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.274 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.274 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.274 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:43.274 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.274 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.274 [2024-11-26 13:20:31.589479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:43.275 [2024-11-26 13:20:31.589552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 BaseBdev2 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 [ 00:07:43.275 { 00:07:43.275 "name": "BaseBdev2", 00:07:43.275 "aliases": [ 00:07:43.275 "e0728110-1ccd-4187-940e-bc1c200b3590" 00:07:43.275 ], 00:07:43.275 "product_name": "Malloc disk", 00:07:43.275 "block_size": 512, 00:07:43.275 "num_blocks": 65536, 00:07:43.275 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:43.275 "assigned_rate_limits": { 00:07:43.275 "rw_ios_per_sec": 0, 00:07:43.275 "rw_mbytes_per_sec": 0, 00:07:43.275 "r_mbytes_per_sec": 0, 00:07:43.275 "w_mbytes_per_sec": 0 00:07:43.275 }, 00:07:43.275 "claimed": false, 00:07:43.275 "zoned": false, 00:07:43.275 "supported_io_types": { 00:07:43.275 "read": true, 00:07:43.275 "write": true, 00:07:43.275 "unmap": true, 00:07:43.275 "flush": true, 00:07:43.275 "reset": true, 00:07:43.275 "nvme_admin": false, 00:07:43.275 "nvme_io": false, 00:07:43.275 "nvme_io_md": false, 00:07:43.275 "write_zeroes": true, 00:07:43.275 "zcopy": true, 00:07:43.275 "get_zone_info": false, 00:07:43.275 "zone_management": false, 00:07:43.275 "zone_append": false, 00:07:43.275 "compare": false, 00:07:43.275 "compare_and_write": false, 00:07:43.275 "abort": true, 00:07:43.275 "seek_hole": false, 00:07:43.275 "seek_data": false, 00:07:43.275 "copy": true, 00:07:43.275 "nvme_iov_md": false 00:07:43.275 }, 00:07:43.275 "memory_domains": [ 00:07:43.275 { 00:07:43.275 "dma_device_id": "system", 00:07:43.275 "dma_device_type": 1 00:07:43.275 }, 
00:07:43.275 { 00:07:43.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.275 "dma_device_type": 2 00:07:43.275 } 00:07:43.275 ], 00:07:43.275 "driver_specific": {} 00:07:43.275 } 00:07:43.275 ] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 BaseBdev3 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 [ 00:07:43.275 { 00:07:43.275 "name": "BaseBdev3", 00:07:43.275 "aliases": [ 00:07:43.275 "b4d79568-d873-4bec-a8eb-e12a2022f630" 00:07:43.275 ], 00:07:43.275 "product_name": "Malloc disk", 00:07:43.275 "block_size": 512, 00:07:43.275 "num_blocks": 65536, 00:07:43.275 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:43.275 "assigned_rate_limits": { 00:07:43.275 "rw_ios_per_sec": 0, 00:07:43.275 "rw_mbytes_per_sec": 0, 00:07:43.275 "r_mbytes_per_sec": 0, 00:07:43.275 "w_mbytes_per_sec": 0 00:07:43.275 }, 00:07:43.275 "claimed": false, 00:07:43.275 "zoned": false, 00:07:43.275 "supported_io_types": { 00:07:43.275 "read": true, 00:07:43.275 "write": true, 00:07:43.275 "unmap": true, 00:07:43.275 "flush": true, 00:07:43.275 "reset": true, 00:07:43.275 "nvme_admin": false, 00:07:43.275 "nvme_io": false, 00:07:43.275 "nvme_io_md": false, 00:07:43.275 "write_zeroes": true, 00:07:43.275 "zcopy": true, 00:07:43.275 "get_zone_info": false, 00:07:43.275 "zone_management": false, 00:07:43.275 "zone_append": false, 00:07:43.275 "compare": false, 00:07:43.275 "compare_and_write": false, 00:07:43.275 "abort": true, 00:07:43.275 "seek_hole": false, 00:07:43.275 "seek_data": false, 00:07:43.275 "copy": true, 00:07:43.275 "nvme_iov_md": false 00:07:43.275 }, 00:07:43.275 "memory_domains": [ 00:07:43.275 { 00:07:43.275 "dma_device_id": "system", 00:07:43.275 "dma_device_type": 1 00:07:43.275 }, 00:07:43.275 { 
00:07:43.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.275 "dma_device_type": 2 00:07:43.275 } 00:07:43.275 ], 00:07:43.275 "driver_specific": {} 00:07:43.275 } 00:07:43.275 ] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.275 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.534 [2024-11-26 13:20:31.839923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.534 [2024-11-26 13:20:31.839995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.534 [2024-11-26 13:20:31.840023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.534 [2024-11-26 13:20:31.842046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.534 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.535 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.535 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.535 "name": "Existed_Raid", 00:07:43.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.535 "strip_size_kb": 64, 00:07:43.535 "state": "configuring", 00:07:43.535 "raid_level": "raid0", 00:07:43.535 "superblock": false, 00:07:43.535 "num_base_bdevs": 3, 00:07:43.535 "num_base_bdevs_discovered": 2, 00:07:43.535 "num_base_bdevs_operational": 3, 00:07:43.535 "base_bdevs_list": [ 00:07:43.535 { 00:07:43.535 "name": "BaseBdev1", 00:07:43.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.535 
"is_configured": false, 00:07:43.535 "data_offset": 0, 00:07:43.535 "data_size": 0 00:07:43.535 }, 00:07:43.535 { 00:07:43.535 "name": "BaseBdev2", 00:07:43.535 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:43.535 "is_configured": true, 00:07:43.535 "data_offset": 0, 00:07:43.535 "data_size": 65536 00:07:43.535 }, 00:07:43.535 { 00:07:43.535 "name": "BaseBdev3", 00:07:43.535 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:43.535 "is_configured": true, 00:07:43.535 "data_offset": 0, 00:07:43.535 "data_size": 65536 00:07:43.535 } 00:07:43.535 ] 00:07:43.535 }' 00:07:43.535 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.535 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.794 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:43.794 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.794 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.053 [2024-11-26 13:20:32.360004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.053 13:20:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.053 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.053 "name": "Existed_Raid", 00:07:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.053 "strip_size_kb": 64, 00:07:44.053 "state": "configuring", 00:07:44.053 "raid_level": "raid0", 00:07:44.053 "superblock": false, 00:07:44.053 "num_base_bdevs": 3, 00:07:44.053 "num_base_bdevs_discovered": 1, 00:07:44.053 "num_base_bdevs_operational": 3, 00:07:44.053 "base_bdevs_list": [ 00:07:44.053 { 00:07:44.053 "name": "BaseBdev1", 00:07:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.053 "is_configured": false, 00:07:44.053 "data_offset": 0, 00:07:44.053 "data_size": 0 00:07:44.053 }, 00:07:44.053 { 00:07:44.053 "name": null, 00:07:44.053 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:44.053 "is_configured": false, 00:07:44.053 "data_offset": 0, 
00:07:44.053 "data_size": 65536 00:07:44.054 }, 00:07:44.054 { 00:07:44.054 "name": "BaseBdev3", 00:07:44.054 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:44.054 "is_configured": true, 00:07:44.054 "data_offset": 0, 00:07:44.054 "data_size": 65536 00:07:44.054 } 00:07:44.054 ] 00:07:44.054 }' 00:07:44.054 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.054 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.623 [2024-11-26 13:20:32.976091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.623 BaseBdev1 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.623 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.623 [ 00:07:44.623 { 00:07:44.623 "name": "BaseBdev1", 00:07:44.623 "aliases": [ 00:07:44.623 "8a47a4d3-033d-45a5-a7b8-4999752da093" 00:07:44.623 ], 00:07:44.623 "product_name": "Malloc disk", 00:07:44.623 "block_size": 512, 00:07:44.623 "num_blocks": 65536, 00:07:44.623 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:44.623 "assigned_rate_limits": { 00:07:44.623 "rw_ios_per_sec": 0, 00:07:44.623 "rw_mbytes_per_sec": 0, 00:07:44.623 "r_mbytes_per_sec": 0, 00:07:44.623 "w_mbytes_per_sec": 0 00:07:44.623 }, 00:07:44.623 "claimed": true, 00:07:44.623 "claim_type": "exclusive_write", 00:07:44.623 "zoned": false, 00:07:44.623 "supported_io_types": { 00:07:44.623 "read": true, 00:07:44.623 "write": true, 00:07:44.623 "unmap": 
true, 00:07:44.623 "flush": true, 00:07:44.623 "reset": true, 00:07:44.623 "nvme_admin": false, 00:07:44.623 "nvme_io": false, 00:07:44.623 "nvme_io_md": false, 00:07:44.623 "write_zeroes": true, 00:07:44.623 "zcopy": true, 00:07:44.623 "get_zone_info": false, 00:07:44.623 "zone_management": false, 00:07:44.623 "zone_append": false, 00:07:44.623 "compare": false, 00:07:44.623 "compare_and_write": false, 00:07:44.623 "abort": true, 00:07:44.623 "seek_hole": false, 00:07:44.623 "seek_data": false, 00:07:44.623 "copy": true, 00:07:44.623 "nvme_iov_md": false 00:07:44.623 }, 00:07:44.623 "memory_domains": [ 00:07:44.623 { 00:07:44.623 "dma_device_id": "system", 00:07:44.623 "dma_device_type": 1 00:07:44.623 }, 00:07:44.623 { 00:07:44.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.623 "dma_device_type": 2 00:07:44.623 } 00:07:44.623 ], 00:07:44.623 "driver_specific": {} 00:07:44.623 } 00:07:44.623 ] 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.623 13:20:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.623 "name": "Existed_Raid", 00:07:44.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.623 "strip_size_kb": 64, 00:07:44.623 "state": "configuring", 00:07:44.623 "raid_level": "raid0", 00:07:44.623 "superblock": false, 00:07:44.623 "num_base_bdevs": 3, 00:07:44.623 "num_base_bdevs_discovered": 2, 00:07:44.623 "num_base_bdevs_operational": 3, 00:07:44.623 "base_bdevs_list": [ 00:07:44.623 { 00:07:44.623 "name": "BaseBdev1", 00:07:44.623 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:44.623 "is_configured": true, 00:07:44.623 "data_offset": 0, 00:07:44.623 "data_size": 65536 00:07:44.623 }, 00:07:44.623 { 00:07:44.623 "name": null, 00:07:44.623 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:44.623 "is_configured": false, 00:07:44.623 "data_offset": 0, 00:07:44.623 "data_size": 65536 00:07:44.623 }, 00:07:44.623 { 00:07:44.623 "name": "BaseBdev3", 00:07:44.623 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:44.623 "is_configured": true, 00:07:44.623 "data_offset": 0, 
00:07:44.623 "data_size": 65536 00:07:44.623 } 00:07:44.623 ] 00:07:44.623 }' 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.623 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.191 [2024-11-26 13:20:33.556216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.191 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.191 "name": "Existed_Raid", 00:07:45.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.191 "strip_size_kb": 64, 00:07:45.191 "state": "configuring", 00:07:45.191 "raid_level": "raid0", 00:07:45.191 "superblock": false, 00:07:45.191 "num_base_bdevs": 3, 00:07:45.191 "num_base_bdevs_discovered": 1, 00:07:45.191 "num_base_bdevs_operational": 3, 00:07:45.191 "base_bdevs_list": [ 00:07:45.191 { 00:07:45.191 "name": "BaseBdev1", 00:07:45.191 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:45.191 "is_configured": true, 00:07:45.191 "data_offset": 0, 00:07:45.191 "data_size": 65536 00:07:45.191 }, 00:07:45.191 { 
00:07:45.191 "name": null, 00:07:45.191 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:45.191 "is_configured": false, 00:07:45.191 "data_offset": 0, 00:07:45.191 "data_size": 65536 00:07:45.191 }, 00:07:45.191 { 00:07:45.191 "name": null, 00:07:45.191 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:45.191 "is_configured": false, 00:07:45.191 "data_offset": 0, 00:07:45.191 "data_size": 65536 00:07:45.192 } 00:07:45.192 ] 00:07:45.192 }' 00:07:45.192 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.192 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.760 [2024-11-26 13:20:34.128395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:45.760 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.761 "name": "Existed_Raid", 00:07:45.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.761 "strip_size_kb": 64, 00:07:45.761 "state": "configuring", 00:07:45.761 "raid_level": "raid0", 00:07:45.761 
"superblock": false, 00:07:45.761 "num_base_bdevs": 3, 00:07:45.761 "num_base_bdevs_discovered": 2, 00:07:45.761 "num_base_bdevs_operational": 3, 00:07:45.761 "base_bdevs_list": [ 00:07:45.761 { 00:07:45.761 "name": "BaseBdev1", 00:07:45.761 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:45.761 "is_configured": true, 00:07:45.761 "data_offset": 0, 00:07:45.761 "data_size": 65536 00:07:45.761 }, 00:07:45.761 { 00:07:45.761 "name": null, 00:07:45.761 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:45.761 "is_configured": false, 00:07:45.761 "data_offset": 0, 00:07:45.761 "data_size": 65536 00:07:45.761 }, 00:07:45.761 { 00:07:45.761 "name": "BaseBdev3", 00:07:45.761 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:45.761 "is_configured": true, 00:07:45.761 "data_offset": 0, 00:07:45.761 "data_size": 65536 00:07:45.761 } 00:07:45.761 ] 00:07:45.761 }' 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.761 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.329 [2024-11-26 13:20:34.700559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.329 "name": "Existed_Raid", 00:07:46.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.329 "strip_size_kb": 64, 00:07:46.329 "state": "configuring", 00:07:46.329 "raid_level": "raid0", 00:07:46.329 "superblock": false, 00:07:46.329 "num_base_bdevs": 3, 00:07:46.329 "num_base_bdevs_discovered": 1, 00:07:46.329 "num_base_bdevs_operational": 3, 00:07:46.329 "base_bdevs_list": [ 00:07:46.329 { 00:07:46.329 "name": null, 00:07:46.329 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:46.329 "is_configured": false, 00:07:46.329 "data_offset": 0, 00:07:46.329 "data_size": 65536 00:07:46.329 }, 00:07:46.329 { 00:07:46.329 "name": null, 00:07:46.329 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:46.329 "is_configured": false, 00:07:46.329 "data_offset": 0, 00:07:46.329 "data_size": 65536 00:07:46.329 }, 00:07:46.329 { 00:07:46.329 "name": "BaseBdev3", 00:07:46.329 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:46.329 "is_configured": true, 00:07:46.329 "data_offset": 0, 00:07:46.329 "data_size": 65536 00:07:46.329 } 00:07:46.329 ] 00:07:46.329 }' 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.329 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.927 [2024-11-26 13:20:35.336854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.927 "name": "Existed_Raid", 00:07:46.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.927 "strip_size_kb": 64, 00:07:46.927 "state": "configuring", 00:07:46.927 "raid_level": "raid0", 00:07:46.927 "superblock": false, 00:07:46.927 "num_base_bdevs": 3, 00:07:46.927 "num_base_bdevs_discovered": 2, 00:07:46.927 "num_base_bdevs_operational": 3, 00:07:46.927 "base_bdevs_list": [ 00:07:46.927 { 00:07:46.927 "name": null, 00:07:46.927 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:46.927 "is_configured": false, 00:07:46.927 "data_offset": 0, 00:07:46.927 "data_size": 65536 00:07:46.927 }, 00:07:46.927 { 00:07:46.927 "name": "BaseBdev2", 00:07:46.927 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:46.927 "is_configured": true, 00:07:46.927 "data_offset": 0, 00:07:46.927 "data_size": 65536 00:07:46.927 }, 00:07:46.927 { 00:07:46.927 "name": "BaseBdev3", 00:07:46.927 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:46.927 "is_configured": true, 00:07:46.927 "data_offset": 0, 00:07:46.927 "data_size": 65536 00:07:46.927 } 00:07:46.927 ] 00:07:46.927 }' 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.927 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.504 13:20:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8a47a4d3-033d-45a5-a7b8-4999752da093 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.504 [2024-11-26 13:20:35.986454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:47.504 [2024-11-26 13:20:35.986513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:47.504 [2024-11-26 13:20:35.986527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:47.504 [2024-11-26 13:20:35.986831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:07:47.504 [2024-11-26 13:20:35.987001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:47.504 [2024-11-26 13:20:35.987025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:47.504 [2024-11-26 13:20:35.987296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.504 NewBaseBdev 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.504 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.505 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:47.505 [ 00:07:47.505 { 00:07:47.505 "name": "NewBaseBdev", 00:07:47.505 "aliases": [ 00:07:47.505 "8a47a4d3-033d-45a5-a7b8-4999752da093" 00:07:47.505 ], 00:07:47.505 "product_name": "Malloc disk", 00:07:47.505 "block_size": 512, 00:07:47.505 "num_blocks": 65536, 00:07:47.505 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:47.505 "assigned_rate_limits": { 00:07:47.505 "rw_ios_per_sec": 0, 00:07:47.505 "rw_mbytes_per_sec": 0, 00:07:47.505 "r_mbytes_per_sec": 0, 00:07:47.505 "w_mbytes_per_sec": 0 00:07:47.505 }, 00:07:47.505 "claimed": true, 00:07:47.505 "claim_type": "exclusive_write", 00:07:47.505 "zoned": false, 00:07:47.505 "supported_io_types": { 00:07:47.505 "read": true, 00:07:47.505 "write": true, 00:07:47.505 "unmap": true, 00:07:47.505 "flush": true, 00:07:47.505 "reset": true, 00:07:47.505 "nvme_admin": false, 00:07:47.505 "nvme_io": false, 00:07:47.505 "nvme_io_md": false, 00:07:47.505 "write_zeroes": true, 00:07:47.505 "zcopy": true, 00:07:47.505 "get_zone_info": false, 00:07:47.505 "zone_management": false, 00:07:47.505 "zone_append": false, 00:07:47.505 "compare": false, 00:07:47.505 "compare_and_write": false, 00:07:47.505 "abort": true, 00:07:47.505 "seek_hole": false, 00:07:47.505 "seek_data": false, 00:07:47.505 "copy": true, 00:07:47.505 "nvme_iov_md": false 00:07:47.505 }, 00:07:47.505 "memory_domains": [ 00:07:47.505 { 00:07:47.505 "dma_device_id": "system", 00:07:47.505 "dma_device_type": 1 00:07:47.505 }, 00:07:47.505 { 00:07:47.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.505 "dma_device_type": 2 00:07:47.505 } 00:07:47.505 ], 00:07:47.505 "driver_specific": {} 00:07:47.505 } 00:07:47.505 ] 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.505 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.505 "name": "Existed_Raid", 00:07:47.505 "uuid": "5c8ae6e7-835c-4fe2-9d50-ceb40ce5bbfd", 00:07:47.505 "strip_size_kb": 64, 00:07:47.505 "state": "online", 00:07:47.505 "raid_level": "raid0", 00:07:47.505 "superblock": false, 00:07:47.505 "num_base_bdevs": 3, 00:07:47.505 
"num_base_bdevs_discovered": 3, 00:07:47.505 "num_base_bdevs_operational": 3, 00:07:47.505 "base_bdevs_list": [ 00:07:47.505 { 00:07:47.505 "name": "NewBaseBdev", 00:07:47.505 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:47.505 "is_configured": true, 00:07:47.505 "data_offset": 0, 00:07:47.505 "data_size": 65536 00:07:47.505 }, 00:07:47.505 { 00:07:47.505 "name": "BaseBdev2", 00:07:47.505 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:47.505 "is_configured": true, 00:07:47.505 "data_offset": 0, 00:07:47.505 "data_size": 65536 00:07:47.505 }, 00:07:47.505 { 00:07:47.505 "name": "BaseBdev3", 00:07:47.505 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:47.505 "is_configured": true, 00:07:47.505 "data_offset": 0, 00:07:47.505 "data_size": 65536 00:07:47.505 } 00:07:47.505 ] 00:07:47.505 }' 00:07:47.764 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.764 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.022 [2024-11-26 13:20:36.534963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.022 "name": "Existed_Raid", 00:07:48.022 "aliases": [ 00:07:48.022 "5c8ae6e7-835c-4fe2-9d50-ceb40ce5bbfd" 00:07:48.022 ], 00:07:48.022 "product_name": "Raid Volume", 00:07:48.022 "block_size": 512, 00:07:48.022 "num_blocks": 196608, 00:07:48.022 "uuid": "5c8ae6e7-835c-4fe2-9d50-ceb40ce5bbfd", 00:07:48.022 "assigned_rate_limits": { 00:07:48.022 "rw_ios_per_sec": 0, 00:07:48.022 "rw_mbytes_per_sec": 0, 00:07:48.022 "r_mbytes_per_sec": 0, 00:07:48.022 "w_mbytes_per_sec": 0 00:07:48.022 }, 00:07:48.022 "claimed": false, 00:07:48.022 "zoned": false, 00:07:48.022 "supported_io_types": { 00:07:48.022 "read": true, 00:07:48.022 "write": true, 00:07:48.022 "unmap": true, 00:07:48.022 "flush": true, 00:07:48.022 "reset": true, 00:07:48.022 "nvme_admin": false, 00:07:48.022 "nvme_io": false, 00:07:48.022 "nvme_io_md": false, 00:07:48.022 "write_zeroes": true, 00:07:48.022 "zcopy": false, 00:07:48.022 "get_zone_info": false, 00:07:48.022 "zone_management": false, 00:07:48.022 "zone_append": false, 00:07:48.022 "compare": false, 00:07:48.022 "compare_and_write": false, 00:07:48.022 "abort": false, 00:07:48.022 "seek_hole": false, 00:07:48.022 "seek_data": false, 00:07:48.022 "copy": false, 00:07:48.022 "nvme_iov_md": false 00:07:48.022 }, 00:07:48.022 "memory_domains": [ 00:07:48.022 { 00:07:48.022 "dma_device_id": "system", 00:07:48.022 "dma_device_type": 1 00:07:48.022 }, 00:07:48.022 { 00:07:48.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.022 "dma_device_type": 2 00:07:48.022 }, 
00:07:48.022 { 00:07:48.022 "dma_device_id": "system", 00:07:48.022 "dma_device_type": 1 00:07:48.022 }, 00:07:48.022 { 00:07:48.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.022 "dma_device_type": 2 00:07:48.022 }, 00:07:48.022 { 00:07:48.022 "dma_device_id": "system", 00:07:48.022 "dma_device_type": 1 00:07:48.022 }, 00:07:48.022 { 00:07:48.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.022 "dma_device_type": 2 00:07:48.022 } 00:07:48.022 ], 00:07:48.022 "driver_specific": { 00:07:48.022 "raid": { 00:07:48.022 "uuid": "5c8ae6e7-835c-4fe2-9d50-ceb40ce5bbfd", 00:07:48.022 "strip_size_kb": 64, 00:07:48.022 "state": "online", 00:07:48.022 "raid_level": "raid0", 00:07:48.022 "superblock": false, 00:07:48.022 "num_base_bdevs": 3, 00:07:48.022 "num_base_bdevs_discovered": 3, 00:07:48.022 "num_base_bdevs_operational": 3, 00:07:48.022 "base_bdevs_list": [ 00:07:48.022 { 00:07:48.022 "name": "NewBaseBdev", 00:07:48.022 "uuid": "8a47a4d3-033d-45a5-a7b8-4999752da093", 00:07:48.022 "is_configured": true, 00:07:48.022 "data_offset": 0, 00:07:48.022 "data_size": 65536 00:07:48.022 }, 00:07:48.022 { 00:07:48.022 "name": "BaseBdev2", 00:07:48.022 "uuid": "e0728110-1ccd-4187-940e-bc1c200b3590", 00:07:48.022 "is_configured": true, 00:07:48.022 "data_offset": 0, 00:07:48.022 "data_size": 65536 00:07:48.022 }, 00:07:48.022 { 00:07:48.022 "name": "BaseBdev3", 00:07:48.022 "uuid": "b4d79568-d873-4bec-a8eb-e12a2022f630", 00:07:48.022 "is_configured": true, 00:07:48.022 "data_offset": 0, 00:07:48.022 "data_size": 65536 00:07:48.022 } 00:07:48.022 ] 00:07:48.022 } 00:07:48.022 } 00:07:48.022 }' 00:07:48.022 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:48.282 BaseBdev2 00:07:48.282 BaseBdev3' 00:07:48.282 13:20:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.282 [2024-11-26 13:20:36.834783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:48.282 [2024-11-26 13:20:36.834810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.282 [2024-11-26 13:20:36.834876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.282 [2024-11-26 13:20:36.834930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.282 [2024-11-26 13:20:36.834946] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63327 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63327 ']' 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63327 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:48.282 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.541 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63327 00:07:48.541 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.541 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.541 killing process with pid 63327 00:07:48.541 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63327' 00:07:48.541 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63327 00:07:48.541 [2024-11-26 13:20:36.874969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.541 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63327 00:07:48.541 [2024-11-26 13:20:37.076343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.478 00:07:49.478 real 0m11.152s 00:07:49.478 user 0m18.808s 00:07:49.478 sys 0m1.553s 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.478 ************************************ 00:07:49.478 END TEST raid_state_function_test 00:07:49.478 ************************************ 00:07:49.478 13:20:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:49.478 13:20:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.478 13:20:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.478 13:20:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.478 ************************************ 00:07:49.478 START TEST raid_state_function_test_sb 00:07:49.478 ************************************ 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.478 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63955 00:07:49.479 Process raid pid: 63955 
00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63955' 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63955 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63955 ']' 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.479 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.738 [2024-11-26 13:20:38.089483] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:07:49.738 [2024-11-26 13:20:38.089699] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.738 [2024-11-26 13:20:38.262801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.997 [2024-11-26 13:20:38.367960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.997 [2024-11-26 13:20:38.541541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.997 [2024-11-26 13:20:38.541579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.566 [2024-11-26 13:20:38.965833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.566 [2024-11-26 13:20:38.965919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.566 [2024-11-26 13:20:38.965934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.566 [2024-11-26 13:20:38.965948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.566 [2024-11-26 13:20:38.965957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:50.566 [2024-11-26 13:20:38.965968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.566 13:20:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.566 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.566 "name": "Existed_Raid", 00:07:50.566 "uuid": "bcf25709-f4c4-4928-8355-488f84c6b41f", 00:07:50.566 "strip_size_kb": 64, 00:07:50.566 "state": "configuring", 00:07:50.566 "raid_level": "raid0", 00:07:50.566 "superblock": true, 00:07:50.566 "num_base_bdevs": 3, 00:07:50.566 "num_base_bdevs_discovered": 0, 00:07:50.566 "num_base_bdevs_operational": 3, 00:07:50.566 "base_bdevs_list": [ 00:07:50.566 { 00:07:50.566 "name": "BaseBdev1", 00:07:50.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.566 "is_configured": false, 00:07:50.566 "data_offset": 0, 00:07:50.566 "data_size": 0 00:07:50.566 }, 00:07:50.566 { 00:07:50.566 "name": "BaseBdev2", 00:07:50.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.566 "is_configured": false, 00:07:50.566 "data_offset": 0, 00:07:50.566 "data_size": 0 00:07:50.566 }, 00:07:50.566 { 00:07:50.566 "name": "BaseBdev3", 00:07:50.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.566 "is_configured": false, 00:07:50.566 "data_offset": 0, 00:07:50.566 "data_size": 0 00:07:50.566 } 00:07:50.566 ] 00:07:50.566 }' 00:07:50.566 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.566 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.132 [2024-11-26 13:20:39.445850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.132 [2024-11-26 13:20:39.445901] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.132 [2024-11-26 13:20:39.453864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.132 [2024-11-26 13:20:39.453910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.132 [2024-11-26 13:20:39.453943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.132 [2024-11-26 13:20:39.453955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.132 [2024-11-26 13:20:39.453963] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:51.132 [2024-11-26 13:20:39.453974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:51.132 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.133 [2024-11-26 13:20:39.492009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.133 BaseBdev1 
00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.133 [ 00:07:51.133 { 00:07:51.133 "name": "BaseBdev1", 00:07:51.133 "aliases": [ 00:07:51.133 "ff8ef1cc-9445-4fdc-a35e-8c2e17519246" 00:07:51.133 ], 00:07:51.133 "product_name": "Malloc disk", 00:07:51.133 "block_size": 512, 00:07:51.133 "num_blocks": 65536, 00:07:51.133 "uuid": "ff8ef1cc-9445-4fdc-a35e-8c2e17519246", 00:07:51.133 "assigned_rate_limits": { 00:07:51.133 
"rw_ios_per_sec": 0, 00:07:51.133 "rw_mbytes_per_sec": 0, 00:07:51.133 "r_mbytes_per_sec": 0, 00:07:51.133 "w_mbytes_per_sec": 0 00:07:51.133 }, 00:07:51.133 "claimed": true, 00:07:51.133 "claim_type": "exclusive_write", 00:07:51.133 "zoned": false, 00:07:51.133 "supported_io_types": { 00:07:51.133 "read": true, 00:07:51.133 "write": true, 00:07:51.133 "unmap": true, 00:07:51.133 "flush": true, 00:07:51.133 "reset": true, 00:07:51.133 "nvme_admin": false, 00:07:51.133 "nvme_io": false, 00:07:51.133 "nvme_io_md": false, 00:07:51.133 "write_zeroes": true, 00:07:51.133 "zcopy": true, 00:07:51.133 "get_zone_info": false, 00:07:51.133 "zone_management": false, 00:07:51.133 "zone_append": false, 00:07:51.133 "compare": false, 00:07:51.133 "compare_and_write": false, 00:07:51.133 "abort": true, 00:07:51.133 "seek_hole": false, 00:07:51.133 "seek_data": false, 00:07:51.133 "copy": true, 00:07:51.133 "nvme_iov_md": false 00:07:51.133 }, 00:07:51.133 "memory_domains": [ 00:07:51.133 { 00:07:51.133 "dma_device_id": "system", 00:07:51.133 "dma_device_type": 1 00:07:51.133 }, 00:07:51.133 { 00:07:51.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.133 "dma_device_type": 2 00:07:51.133 } 00:07:51.133 ], 00:07:51.133 "driver_specific": {} 00:07:51.133 } 00:07:51.133 ] 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.133 "name": "Existed_Raid", 00:07:51.133 "uuid": "24d0f069-e696-4f04-8d07-370898c308ac", 00:07:51.133 "strip_size_kb": 64, 00:07:51.133 "state": "configuring", 00:07:51.133 "raid_level": "raid0", 00:07:51.133 "superblock": true, 00:07:51.133 "num_base_bdevs": 3, 00:07:51.133 "num_base_bdevs_discovered": 1, 00:07:51.133 "num_base_bdevs_operational": 3, 00:07:51.133 "base_bdevs_list": [ 00:07:51.133 { 00:07:51.133 "name": "BaseBdev1", 00:07:51.133 "uuid": "ff8ef1cc-9445-4fdc-a35e-8c2e17519246", 00:07:51.133 "is_configured": true, 00:07:51.133 "data_offset": 2048, 00:07:51.133 "data_size": 63488 
00:07:51.133 }, 00:07:51.133 { 00:07:51.133 "name": "BaseBdev2", 00:07:51.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.133 "is_configured": false, 00:07:51.133 "data_offset": 0, 00:07:51.133 "data_size": 0 00:07:51.133 }, 00:07:51.133 { 00:07:51.133 "name": "BaseBdev3", 00:07:51.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.133 "is_configured": false, 00:07:51.133 "data_offset": 0, 00:07:51.133 "data_size": 0 00:07:51.133 } 00:07:51.133 ] 00:07:51.133 }' 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.133 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.702 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.702 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.702 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.702 [2024-11-26 13:20:40.004123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.702 [2024-11-26 13:20:40.004168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.702 [2024-11-26 13:20:40.012187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.702 [2024-11-26 
13:20:40.014319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.702 [2024-11-26 13:20:40.014386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.702 [2024-11-26 13:20:40.014430] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:51.702 [2024-11-26 13:20:40.014443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.702 "name": "Existed_Raid", 00:07:51.702 "uuid": "bf977274-41c2-4575-8c1d-52c42a721ccd", 00:07:51.702 "strip_size_kb": 64, 00:07:51.702 "state": "configuring", 00:07:51.702 "raid_level": "raid0", 00:07:51.702 "superblock": true, 00:07:51.702 "num_base_bdevs": 3, 00:07:51.702 "num_base_bdevs_discovered": 1, 00:07:51.702 "num_base_bdevs_operational": 3, 00:07:51.702 "base_bdevs_list": [ 00:07:51.702 { 00:07:51.702 "name": "BaseBdev1", 00:07:51.702 "uuid": "ff8ef1cc-9445-4fdc-a35e-8c2e17519246", 00:07:51.702 "is_configured": true, 00:07:51.702 "data_offset": 2048, 00:07:51.702 "data_size": 63488 00:07:51.702 }, 00:07:51.702 { 00:07:51.702 "name": "BaseBdev2", 00:07:51.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.702 "is_configured": false, 00:07:51.702 "data_offset": 0, 00:07:51.702 "data_size": 0 00:07:51.702 }, 00:07:51.702 { 00:07:51.702 "name": "BaseBdev3", 00:07:51.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.702 "is_configured": false, 00:07:51.702 "data_offset": 0, 00:07:51.702 "data_size": 0 00:07:51.702 } 00:07:51.702 ] 00:07:51.702 }' 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.702 13:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.961 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.961 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.961 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.220 [2024-11-26 13:20:40.548857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.220 BaseBdev2 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.220 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.220 [ 00:07:52.220 { 00:07:52.220 "name": "BaseBdev2", 00:07:52.220 "aliases": [ 00:07:52.220 "2e93aab1-6110-42e3-a862-edce568a6ee0" 00:07:52.220 ], 00:07:52.220 "product_name": "Malloc disk", 00:07:52.220 "block_size": 512, 00:07:52.220 "num_blocks": 65536, 00:07:52.220 "uuid": "2e93aab1-6110-42e3-a862-edce568a6ee0", 00:07:52.220 "assigned_rate_limits": { 00:07:52.220 "rw_ios_per_sec": 0, 00:07:52.220 "rw_mbytes_per_sec": 0, 00:07:52.220 "r_mbytes_per_sec": 0, 00:07:52.220 "w_mbytes_per_sec": 0 00:07:52.220 }, 00:07:52.220 "claimed": true, 00:07:52.220 "claim_type": "exclusive_write", 00:07:52.220 "zoned": false, 00:07:52.220 "supported_io_types": { 00:07:52.220 "read": true, 00:07:52.220 "write": true, 00:07:52.220 "unmap": true, 00:07:52.220 "flush": true, 00:07:52.220 "reset": true, 00:07:52.220 "nvme_admin": false, 00:07:52.220 "nvme_io": false, 00:07:52.220 "nvme_io_md": false, 00:07:52.220 "write_zeroes": true, 00:07:52.221 "zcopy": true, 00:07:52.221 "get_zone_info": false, 00:07:52.221 "zone_management": false, 00:07:52.221 "zone_append": false, 00:07:52.221 "compare": false, 00:07:52.221 "compare_and_write": false, 00:07:52.221 "abort": true, 00:07:52.221 "seek_hole": false, 00:07:52.221 "seek_data": false, 00:07:52.221 "copy": true, 00:07:52.221 "nvme_iov_md": false 00:07:52.221 }, 00:07:52.221 "memory_domains": [ 00:07:52.221 { 00:07:52.221 "dma_device_id": "system", 00:07:52.221 "dma_device_type": 1 00:07:52.221 }, 00:07:52.221 { 00:07:52.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.221 "dma_device_type": 2 00:07:52.221 } 00:07:52.221 ], 00:07:52.221 "driver_specific": {} 00:07:52.221 } 00:07:52.221 ] 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.221 "name": "Existed_Raid", 00:07:52.221 "uuid": "bf977274-41c2-4575-8c1d-52c42a721ccd", 00:07:52.221 "strip_size_kb": 64, 00:07:52.221 "state": "configuring", 00:07:52.221 "raid_level": "raid0", 00:07:52.221 "superblock": true, 00:07:52.221 "num_base_bdevs": 3, 00:07:52.221 "num_base_bdevs_discovered": 2, 00:07:52.221 "num_base_bdevs_operational": 3, 00:07:52.221 "base_bdevs_list": [ 00:07:52.221 { 00:07:52.221 "name": "BaseBdev1", 00:07:52.221 "uuid": "ff8ef1cc-9445-4fdc-a35e-8c2e17519246", 00:07:52.221 "is_configured": true, 00:07:52.221 "data_offset": 2048, 00:07:52.221 "data_size": 63488 00:07:52.221 }, 00:07:52.221 { 00:07:52.221 "name": "BaseBdev2", 00:07:52.221 "uuid": "2e93aab1-6110-42e3-a862-edce568a6ee0", 00:07:52.221 "is_configured": true, 00:07:52.221 "data_offset": 2048, 00:07:52.221 "data_size": 63488 00:07:52.221 }, 00:07:52.221 { 00:07:52.221 "name": "BaseBdev3", 00:07:52.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.221 "is_configured": false, 00:07:52.221 "data_offset": 0, 00:07:52.221 "data_size": 0 00:07:52.221 } 00:07:52.221 ] 00:07:52.221 }' 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.221 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.789 [2024-11-26 13:20:41.111078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:52.789 [2024-11-26 13:20:41.111403] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.789 [2024-11-26 13:20:41.111440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:52.789 BaseBdev3 00:07:52.789 [2024-11-26 13:20:41.111790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:52.789 [2024-11-26 13:20:41.111999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.789 [2024-11-26 13:20:41.112022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.789 [2024-11-26 13:20:41.112191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.789 [ 00:07:52.789 { 00:07:52.789 "name": "BaseBdev3", 00:07:52.789 "aliases": [ 00:07:52.789 "2f19ada0-a82f-4419-940c-e8b25694c568" 00:07:52.789 ], 00:07:52.789 "product_name": "Malloc disk", 00:07:52.789 "block_size": 512, 00:07:52.789 "num_blocks": 65536, 00:07:52.789 "uuid": "2f19ada0-a82f-4419-940c-e8b25694c568", 00:07:52.789 "assigned_rate_limits": { 00:07:52.789 "rw_ios_per_sec": 0, 00:07:52.789 "rw_mbytes_per_sec": 0, 00:07:52.789 "r_mbytes_per_sec": 0, 00:07:52.789 "w_mbytes_per_sec": 0 00:07:52.789 }, 00:07:52.789 "claimed": true, 00:07:52.789 "claim_type": "exclusive_write", 00:07:52.789 "zoned": false, 00:07:52.789 "supported_io_types": { 00:07:52.789 "read": true, 00:07:52.789 "write": true, 00:07:52.789 "unmap": true, 00:07:52.789 "flush": true, 00:07:52.789 "reset": true, 00:07:52.789 "nvme_admin": false, 00:07:52.789 "nvme_io": false, 00:07:52.789 "nvme_io_md": false, 00:07:52.789 "write_zeroes": true, 00:07:52.789 "zcopy": true, 00:07:52.789 "get_zone_info": false, 00:07:52.789 "zone_management": false, 00:07:52.789 "zone_append": false, 00:07:52.789 "compare": false, 00:07:52.789 "compare_and_write": false, 00:07:52.789 "abort": true, 00:07:52.789 "seek_hole": false, 00:07:52.789 "seek_data": false, 00:07:52.789 "copy": true, 00:07:52.789 "nvme_iov_md": false 00:07:52.789 }, 00:07:52.789 "memory_domains": [ 00:07:52.789 { 00:07:52.789 "dma_device_id": "system", 00:07:52.789 "dma_device_type": 1 00:07:52.789 }, 00:07:52.789 { 00:07:52.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.789 "dma_device_type": 2 00:07:52.789 } 00:07:52.789 ], 00:07:52.789 "driver_specific": 
{} 00:07:52.789 } 00:07:52.789 ] 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:52.789 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.790 "name": "Existed_Raid", 00:07:52.790 "uuid": "bf977274-41c2-4575-8c1d-52c42a721ccd", 00:07:52.790 "strip_size_kb": 64, 00:07:52.790 "state": "online", 00:07:52.790 "raid_level": "raid0", 00:07:52.790 "superblock": true, 00:07:52.790 "num_base_bdevs": 3, 00:07:52.790 "num_base_bdevs_discovered": 3, 00:07:52.790 "num_base_bdevs_operational": 3, 00:07:52.790 "base_bdevs_list": [ 00:07:52.790 { 00:07:52.790 "name": "BaseBdev1", 00:07:52.790 "uuid": "ff8ef1cc-9445-4fdc-a35e-8c2e17519246", 00:07:52.790 "is_configured": true, 00:07:52.790 "data_offset": 2048, 00:07:52.790 "data_size": 63488 00:07:52.790 }, 00:07:52.790 { 00:07:52.790 "name": "BaseBdev2", 00:07:52.790 "uuid": "2e93aab1-6110-42e3-a862-edce568a6ee0", 00:07:52.790 "is_configured": true, 00:07:52.790 "data_offset": 2048, 00:07:52.790 "data_size": 63488 00:07:52.790 }, 00:07:52.790 { 00:07:52.790 "name": "BaseBdev3", 00:07:52.790 "uuid": "2f19ada0-a82f-4419-940c-e8b25694c568", 00:07:52.790 "is_configured": true, 00:07:52.790 "data_offset": 2048, 00:07:52.790 "data_size": 63488 00:07:52.790 } 00:07:52.790 ] 00:07:52.790 }' 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.790 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.363 [2024-11-26 13:20:41.655588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.363 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.363 "name": "Existed_Raid", 00:07:53.363 "aliases": [ 00:07:53.363 "bf977274-41c2-4575-8c1d-52c42a721ccd" 00:07:53.363 ], 00:07:53.363 "product_name": "Raid Volume", 00:07:53.363 "block_size": 512, 00:07:53.363 "num_blocks": 190464, 00:07:53.363 "uuid": "bf977274-41c2-4575-8c1d-52c42a721ccd", 00:07:53.363 "assigned_rate_limits": { 00:07:53.363 "rw_ios_per_sec": 0, 00:07:53.363 "rw_mbytes_per_sec": 0, 00:07:53.363 "r_mbytes_per_sec": 0, 00:07:53.363 "w_mbytes_per_sec": 0 00:07:53.363 }, 00:07:53.363 "claimed": false, 00:07:53.363 "zoned": false, 00:07:53.363 "supported_io_types": { 00:07:53.363 "read": true, 00:07:53.363 "write": true, 00:07:53.363 "unmap": true, 00:07:53.363 "flush": true, 00:07:53.363 "reset": true, 00:07:53.363 "nvme_admin": false, 00:07:53.363 "nvme_io": false, 00:07:53.363 "nvme_io_md": false, 00:07:53.363 
"write_zeroes": true, 00:07:53.363 "zcopy": false, 00:07:53.363 "get_zone_info": false, 00:07:53.363 "zone_management": false, 00:07:53.363 "zone_append": false, 00:07:53.363 "compare": false, 00:07:53.363 "compare_and_write": false, 00:07:53.363 "abort": false, 00:07:53.363 "seek_hole": false, 00:07:53.363 "seek_data": false, 00:07:53.363 "copy": false, 00:07:53.363 "nvme_iov_md": false 00:07:53.363 }, 00:07:53.363 "memory_domains": [ 00:07:53.363 { 00:07:53.363 "dma_device_id": "system", 00:07:53.363 "dma_device_type": 1 00:07:53.363 }, 00:07:53.363 { 00:07:53.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.363 "dma_device_type": 2 00:07:53.363 }, 00:07:53.363 { 00:07:53.363 "dma_device_id": "system", 00:07:53.364 "dma_device_type": 1 00:07:53.364 }, 00:07:53.364 { 00:07:53.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.364 "dma_device_type": 2 00:07:53.364 }, 00:07:53.364 { 00:07:53.364 "dma_device_id": "system", 00:07:53.364 "dma_device_type": 1 00:07:53.364 }, 00:07:53.364 { 00:07:53.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.364 "dma_device_type": 2 00:07:53.364 } 00:07:53.364 ], 00:07:53.364 "driver_specific": { 00:07:53.364 "raid": { 00:07:53.364 "uuid": "bf977274-41c2-4575-8c1d-52c42a721ccd", 00:07:53.364 "strip_size_kb": 64, 00:07:53.364 "state": "online", 00:07:53.364 "raid_level": "raid0", 00:07:53.364 "superblock": true, 00:07:53.364 "num_base_bdevs": 3, 00:07:53.364 "num_base_bdevs_discovered": 3, 00:07:53.364 "num_base_bdevs_operational": 3, 00:07:53.364 "base_bdevs_list": [ 00:07:53.364 { 00:07:53.364 "name": "BaseBdev1", 00:07:53.364 "uuid": "ff8ef1cc-9445-4fdc-a35e-8c2e17519246", 00:07:53.364 "is_configured": true, 00:07:53.364 "data_offset": 2048, 00:07:53.364 "data_size": 63488 00:07:53.364 }, 00:07:53.364 { 00:07:53.364 "name": "BaseBdev2", 00:07:53.364 "uuid": "2e93aab1-6110-42e3-a862-edce568a6ee0", 00:07:53.364 "is_configured": true, 00:07:53.364 "data_offset": 2048, 00:07:53.364 "data_size": 63488 00:07:53.364 }, 
00:07:53.364 { 00:07:53.364 "name": "BaseBdev3", 00:07:53.364 "uuid": "2f19ada0-a82f-4419-940c-e8b25694c568", 00:07:53.364 "is_configured": true, 00:07:53.364 "data_offset": 2048, 00:07:53.364 "data_size": 63488 00:07:53.364 } 00:07:53.364 ] 00:07:53.364 } 00:07:53.364 } 00:07:53.364 }' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:53.364 BaseBdev2 00:07:53.364 BaseBdev3' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.364 
13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.364 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.623 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.623 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.623 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.623 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.623 13:20:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.623 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.623 [2024-11-26 13:20:41.975369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.623 [2024-11-26 13:20:41.975398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.623 [2024-11-26 13:20:41.975466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.623 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.624 "name": "Existed_Raid", 00:07:53.624 "uuid": "bf977274-41c2-4575-8c1d-52c42a721ccd", 00:07:53.624 "strip_size_kb": 64, 00:07:53.624 "state": "offline", 00:07:53.624 "raid_level": "raid0", 00:07:53.624 "superblock": true, 00:07:53.624 "num_base_bdevs": 3, 00:07:53.624 "num_base_bdevs_discovered": 2, 00:07:53.624 "num_base_bdevs_operational": 2, 00:07:53.624 "base_bdevs_list": [ 00:07:53.624 { 00:07:53.624 "name": null, 00:07:53.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.624 "is_configured": false, 00:07:53.624 "data_offset": 0, 00:07:53.624 "data_size": 63488 00:07:53.624 }, 00:07:53.624 { 00:07:53.624 "name": "BaseBdev2", 00:07:53.624 "uuid": "2e93aab1-6110-42e3-a862-edce568a6ee0", 00:07:53.624 "is_configured": true, 00:07:53.624 "data_offset": 2048, 00:07:53.624 "data_size": 63488 00:07:53.624 }, 00:07:53.624 { 00:07:53.624 "name": "BaseBdev3", 00:07:53.624 "uuid": "2f19ada0-a82f-4419-940c-e8b25694c568", 
00:07:53.624 "is_configured": true, 00:07:53.624 "data_offset": 2048, 00:07:53.624 "data_size": 63488 00:07:53.624 } 00:07:53.624 ] 00:07:53.624 }' 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.624 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.193 [2024-11-26 13:20:42.608132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.193 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.193 [2024-11-26 13:20:42.732226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:54.193 [2024-11-26 13:20:42.732351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.453 BaseBdev2 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:54.453 13:20:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.453 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.453 [ 00:07:54.453 { 00:07:54.453 "name": "BaseBdev2", 00:07:54.453 "aliases": [ 00:07:54.453 "6d91d108-39b4-47ce-948b-ce12a9b72676" 00:07:54.453 ], 00:07:54.453 "product_name": "Malloc disk", 00:07:54.453 "block_size": 512, 00:07:54.453 "num_blocks": 65536, 00:07:54.453 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:54.454 "assigned_rate_limits": { 00:07:54.454 "rw_ios_per_sec": 0, 00:07:54.454 "rw_mbytes_per_sec": 0, 00:07:54.454 "r_mbytes_per_sec": 0, 00:07:54.454 "w_mbytes_per_sec": 0 00:07:54.454 }, 00:07:54.454 "claimed": false, 00:07:54.454 "zoned": false, 00:07:54.454 "supported_io_types": { 00:07:54.454 "read": true, 00:07:54.454 "write": true, 00:07:54.454 "unmap": true, 00:07:54.454 "flush": true, 00:07:54.454 "reset": true, 00:07:54.454 "nvme_admin": false, 00:07:54.454 "nvme_io": false, 00:07:54.454 "nvme_io_md": false, 00:07:54.454 "write_zeroes": true, 00:07:54.454 "zcopy": true, 00:07:54.454 "get_zone_info": false, 00:07:54.454 
"zone_management": false, 00:07:54.454 "zone_append": false, 00:07:54.454 "compare": false, 00:07:54.454 "compare_and_write": false, 00:07:54.454 "abort": true, 00:07:54.454 "seek_hole": false, 00:07:54.454 "seek_data": false, 00:07:54.454 "copy": true, 00:07:54.454 "nvme_iov_md": false 00:07:54.454 }, 00:07:54.454 "memory_domains": [ 00:07:54.454 { 00:07:54.454 "dma_device_id": "system", 00:07:54.454 "dma_device_type": 1 00:07:54.454 }, 00:07:54.454 { 00:07:54.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.454 "dma_device_type": 2 00:07:54.454 } 00:07:54.454 ], 00:07:54.454 "driver_specific": {} 00:07:54.454 } 00:07:54.454 ] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 BaseBdev3 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 [ 00:07:54.454 { 00:07:54.454 "name": "BaseBdev3", 00:07:54.454 "aliases": [ 00:07:54.454 "f327e257-a45b-4659-a460-530433c9dbdf" 00:07:54.454 ], 00:07:54.454 "product_name": "Malloc disk", 00:07:54.454 "block_size": 512, 00:07:54.454 "num_blocks": 65536, 00:07:54.454 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:54.454 "assigned_rate_limits": { 00:07:54.454 "rw_ios_per_sec": 0, 00:07:54.454 "rw_mbytes_per_sec": 0, 00:07:54.454 "r_mbytes_per_sec": 0, 00:07:54.454 "w_mbytes_per_sec": 0 00:07:54.454 }, 00:07:54.454 "claimed": false, 00:07:54.454 "zoned": false, 00:07:54.454 "supported_io_types": { 00:07:54.454 "read": true, 00:07:54.454 "write": true, 00:07:54.454 "unmap": true, 00:07:54.454 "flush": true, 00:07:54.454 "reset": true, 00:07:54.454 "nvme_admin": false, 00:07:54.454 "nvme_io": false, 00:07:54.454 "nvme_io_md": false, 00:07:54.454 "write_zeroes": true, 00:07:54.454 
"zcopy": true, 00:07:54.454 "get_zone_info": false, 00:07:54.454 "zone_management": false, 00:07:54.454 "zone_append": false, 00:07:54.454 "compare": false, 00:07:54.454 "compare_and_write": false, 00:07:54.454 "abort": true, 00:07:54.454 "seek_hole": false, 00:07:54.454 "seek_data": false, 00:07:54.454 "copy": true, 00:07:54.454 "nvme_iov_md": false 00:07:54.454 }, 00:07:54.454 "memory_domains": [ 00:07:54.454 { 00:07:54.454 "dma_device_id": "system", 00:07:54.454 "dma_device_type": 1 00:07:54.454 }, 00:07:54.454 { 00:07:54.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.454 "dma_device_type": 2 00:07:54.454 } 00:07:54.454 ], 00:07:54.454 "driver_specific": {} 00:07:54.454 } 00:07:54.454 ] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 [2024-11-26 13:20:42.971144] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.454 [2024-11-26 13:20:42.971197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.454 [2024-11-26 13:20:42.971239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.454 [2024-11-26 13:20:42.973416] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.712 13:20:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.712 "name": "Existed_Raid", 00:07:54.712 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:54.712 "strip_size_kb": 64, 00:07:54.712 "state": "configuring", 00:07:54.712 "raid_level": "raid0", 00:07:54.712 "superblock": true, 00:07:54.712 "num_base_bdevs": 3, 00:07:54.712 "num_base_bdevs_discovered": 2, 00:07:54.712 "num_base_bdevs_operational": 3, 00:07:54.712 "base_bdevs_list": [ 00:07:54.712 { 00:07:54.712 "name": "BaseBdev1", 00:07:54.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.712 "is_configured": false, 00:07:54.712 "data_offset": 0, 00:07:54.712 "data_size": 0 00:07:54.712 }, 00:07:54.712 { 00:07:54.712 "name": "BaseBdev2", 00:07:54.712 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:54.712 "is_configured": true, 00:07:54.712 "data_offset": 2048, 00:07:54.712 "data_size": 63488 00:07:54.712 }, 00:07:54.712 { 00:07:54.712 "name": "BaseBdev3", 00:07:54.712 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:54.712 "is_configured": true, 00:07:54.712 "data_offset": 2048, 00:07:54.712 "data_size": 63488 00:07:54.712 } 00:07:54.712 ] 00:07:54.712 }' 00:07:54.712 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.712 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 [2024-11-26 13:20:43.507215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.971 13:20:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.230 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.230 "name": "Existed_Raid", 00:07:55.230 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:55.230 "strip_size_kb": 64, 
00:07:55.230 "state": "configuring", 00:07:55.230 "raid_level": "raid0", 00:07:55.230 "superblock": true, 00:07:55.230 "num_base_bdevs": 3, 00:07:55.230 "num_base_bdevs_discovered": 1, 00:07:55.230 "num_base_bdevs_operational": 3, 00:07:55.230 "base_bdevs_list": [ 00:07:55.230 { 00:07:55.230 "name": "BaseBdev1", 00:07:55.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.230 "is_configured": false, 00:07:55.230 "data_offset": 0, 00:07:55.230 "data_size": 0 00:07:55.230 }, 00:07:55.230 { 00:07:55.230 "name": null, 00:07:55.230 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:55.230 "is_configured": false, 00:07:55.230 "data_offset": 0, 00:07:55.230 "data_size": 63488 00:07:55.230 }, 00:07:55.230 { 00:07:55.230 "name": "BaseBdev3", 00:07:55.230 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:55.230 "is_configured": true, 00:07:55.230 "data_offset": 2048, 00:07:55.230 "data_size": 63488 00:07:55.230 } 00:07:55.230 ] 00:07:55.230 }' 00:07:55.230 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.230 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.489 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.489 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.489 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.489 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:55.489 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.748 [2024-11-26 13:20:44.103015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.748 BaseBdev1 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.748 
[ 00:07:55.748 { 00:07:55.748 "name": "BaseBdev1", 00:07:55.748 "aliases": [ 00:07:55.748 "54efb580-84e7-450a-8c7a-a4ccf35c751c" 00:07:55.748 ], 00:07:55.748 "product_name": "Malloc disk", 00:07:55.748 "block_size": 512, 00:07:55.748 "num_blocks": 65536, 00:07:55.748 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:55.748 "assigned_rate_limits": { 00:07:55.748 "rw_ios_per_sec": 0, 00:07:55.748 "rw_mbytes_per_sec": 0, 00:07:55.748 "r_mbytes_per_sec": 0, 00:07:55.748 "w_mbytes_per_sec": 0 00:07:55.748 }, 00:07:55.748 "claimed": true, 00:07:55.748 "claim_type": "exclusive_write", 00:07:55.748 "zoned": false, 00:07:55.748 "supported_io_types": { 00:07:55.748 "read": true, 00:07:55.748 "write": true, 00:07:55.748 "unmap": true, 00:07:55.748 "flush": true, 00:07:55.748 "reset": true, 00:07:55.748 "nvme_admin": false, 00:07:55.748 "nvme_io": false, 00:07:55.748 "nvme_io_md": false, 00:07:55.748 "write_zeroes": true, 00:07:55.748 "zcopy": true, 00:07:55.748 "get_zone_info": false, 00:07:55.748 "zone_management": false, 00:07:55.748 "zone_append": false, 00:07:55.748 "compare": false, 00:07:55.748 "compare_and_write": false, 00:07:55.748 "abort": true, 00:07:55.748 "seek_hole": false, 00:07:55.748 "seek_data": false, 00:07:55.748 "copy": true, 00:07:55.748 "nvme_iov_md": false 00:07:55.748 }, 00:07:55.748 "memory_domains": [ 00:07:55.748 { 00:07:55.748 "dma_device_id": "system", 00:07:55.748 "dma_device_type": 1 00:07:55.748 }, 00:07:55.748 { 00:07:55.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.748 "dma_device_type": 2 00:07:55.748 } 00:07:55.748 ], 00:07:55.748 "driver_specific": {} 00:07:55.748 } 00:07:55.748 ] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.748 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.748 "name": "Existed_Raid", 00:07:55.748 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:55.748 "strip_size_kb": 64, 00:07:55.748 "state": "configuring", 00:07:55.748 "raid_level": "raid0", 00:07:55.748 "superblock": true, 
00:07:55.748 "num_base_bdevs": 3, 00:07:55.748 "num_base_bdevs_discovered": 2, 00:07:55.748 "num_base_bdevs_operational": 3, 00:07:55.748 "base_bdevs_list": [ 00:07:55.748 { 00:07:55.749 "name": "BaseBdev1", 00:07:55.749 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:55.749 "is_configured": true, 00:07:55.749 "data_offset": 2048, 00:07:55.749 "data_size": 63488 00:07:55.749 }, 00:07:55.749 { 00:07:55.749 "name": null, 00:07:55.749 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:55.749 "is_configured": false, 00:07:55.749 "data_offset": 0, 00:07:55.749 "data_size": 63488 00:07:55.749 }, 00:07:55.749 { 00:07:55.749 "name": "BaseBdev3", 00:07:55.749 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:55.749 "is_configured": true, 00:07:55.749 "data_offset": 2048, 00:07:55.749 "data_size": 63488 00:07:55.749 } 00:07:55.749 ] 00:07:55.749 }' 00:07:55.749 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.749 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.316 [2024-11-26 13:20:44.699180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.316 "name": "Existed_Raid", 00:07:56.316 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:56.316 "strip_size_kb": 64, 00:07:56.316 "state": "configuring", 00:07:56.316 "raid_level": "raid0", 00:07:56.316 "superblock": true, 00:07:56.316 "num_base_bdevs": 3, 00:07:56.316 "num_base_bdevs_discovered": 1, 00:07:56.316 "num_base_bdevs_operational": 3, 00:07:56.316 "base_bdevs_list": [ 00:07:56.316 { 00:07:56.316 "name": "BaseBdev1", 00:07:56.316 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:56.316 "is_configured": true, 00:07:56.316 "data_offset": 2048, 00:07:56.316 "data_size": 63488 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "name": null, 00:07:56.316 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:56.316 "is_configured": false, 00:07:56.316 "data_offset": 0, 00:07:56.316 "data_size": 63488 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "name": null, 00:07:56.316 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:56.316 "is_configured": false, 00:07:56.316 "data_offset": 0, 00:07:56.316 "data_size": 63488 00:07:56.316 } 00:07:56.316 ] 00:07:56.316 }' 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.316 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.884 [2024-11-26 13:20:45.279338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.884 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.884 "name": "Existed_Raid", 00:07:56.884 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:56.884 "strip_size_kb": 64, 00:07:56.884 "state": "configuring", 00:07:56.884 "raid_level": "raid0", 00:07:56.884 "superblock": true, 00:07:56.884 "num_base_bdevs": 3, 00:07:56.884 "num_base_bdevs_discovered": 2, 00:07:56.884 "num_base_bdevs_operational": 3, 00:07:56.884 "base_bdevs_list": [ 00:07:56.884 { 00:07:56.884 "name": "BaseBdev1", 00:07:56.884 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:56.884 "is_configured": true, 00:07:56.884 "data_offset": 2048, 00:07:56.884 "data_size": 63488 00:07:56.884 }, 00:07:56.884 { 00:07:56.885 "name": null, 00:07:56.885 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:56.885 "is_configured": false, 00:07:56.885 "data_offset": 0, 00:07:56.885 "data_size": 63488 00:07:56.885 }, 00:07:56.885 { 00:07:56.885 "name": "BaseBdev3", 00:07:56.885 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:56.885 "is_configured": true, 00:07:56.885 "data_offset": 2048, 00:07:56.885 "data_size": 63488 00:07:56.885 } 00:07:56.885 ] 00:07:56.885 }' 00:07:56.885 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.885 13:20:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.452 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.452 [2024-11-26 13:20:45.847492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.453 "name": "Existed_Raid", 00:07:57.453 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:57.453 "strip_size_kb": 64, 00:07:57.453 "state": "configuring", 00:07:57.453 "raid_level": "raid0", 00:07:57.453 "superblock": true, 00:07:57.453 "num_base_bdevs": 3, 00:07:57.453 "num_base_bdevs_discovered": 1, 00:07:57.453 "num_base_bdevs_operational": 3, 00:07:57.453 "base_bdevs_list": [ 00:07:57.453 { 00:07:57.453 "name": null, 00:07:57.453 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:57.453 "is_configured": false, 00:07:57.453 "data_offset": 0, 00:07:57.453 "data_size": 63488 00:07:57.453 }, 00:07:57.453 { 00:07:57.453 "name": null, 00:07:57.453 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:57.453 "is_configured": false, 00:07:57.453 "data_offset": 0, 00:07:57.453 
"data_size": 63488 00:07:57.453 }, 00:07:57.453 { 00:07:57.453 "name": "BaseBdev3", 00:07:57.453 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:57.453 "is_configured": true, 00:07:57.453 "data_offset": 2048, 00:07:57.453 "data_size": 63488 00:07:57.453 } 00:07:57.453 ] 00:07:57.453 }' 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.453 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.020 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.021 [2024-11-26 13:20:46.495841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.021 13:20:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.021 "name": "Existed_Raid", 00:07:58.021 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:58.021 "strip_size_kb": 64, 00:07:58.021 "state": "configuring", 00:07:58.021 "raid_level": "raid0", 00:07:58.021 "superblock": true, 00:07:58.021 "num_base_bdevs": 3, 00:07:58.021 
"num_base_bdevs_discovered": 2, 00:07:58.021 "num_base_bdevs_operational": 3, 00:07:58.021 "base_bdevs_list": [ 00:07:58.021 { 00:07:58.021 "name": null, 00:07:58.021 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:58.021 "is_configured": false, 00:07:58.021 "data_offset": 0, 00:07:58.021 "data_size": 63488 00:07:58.021 }, 00:07:58.021 { 00:07:58.021 "name": "BaseBdev2", 00:07:58.021 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:58.021 "is_configured": true, 00:07:58.021 "data_offset": 2048, 00:07:58.021 "data_size": 63488 00:07:58.021 }, 00:07:58.021 { 00:07:58.021 "name": "BaseBdev3", 00:07:58.021 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:58.021 "is_configured": true, 00:07:58.021 "data_offset": 2048, 00:07:58.021 "data_size": 63488 00:07:58.021 } 00:07:58.021 ] 00:07:58.021 }' 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.021 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:58.588 13:20:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 54efb580-84e7-450a-8c7a-a4ccf35c751c 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.588 [2024-11-26 13:20:47.132728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:58.588 [2024-11-26 13:20:47.132929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:58.588 [2024-11-26 13:20:47.132948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:58.588 [2024-11-26 13:20:47.133230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:58.588 NewBaseBdev 00:07:58.588 [2024-11-26 13:20:47.133425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:58.588 [2024-11-26 13:20:47.133439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:58.588 [2024-11-26 13:20:47.133576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:58.588 
13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.588 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.589 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.589 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:58.589 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.589 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.847 [ 00:07:58.847 { 00:07:58.847 "name": "NewBaseBdev", 00:07:58.847 "aliases": [ 00:07:58.847 "54efb580-84e7-450a-8c7a-a4ccf35c751c" 00:07:58.847 ], 00:07:58.847 "product_name": "Malloc disk", 00:07:58.847 "block_size": 512, 00:07:58.847 "num_blocks": 65536, 00:07:58.847 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:58.847 "assigned_rate_limits": { 00:07:58.847 "rw_ios_per_sec": 0, 00:07:58.847 "rw_mbytes_per_sec": 0, 00:07:58.847 "r_mbytes_per_sec": 0, 00:07:58.847 "w_mbytes_per_sec": 0 00:07:58.847 }, 00:07:58.847 "claimed": true, 00:07:58.847 "claim_type": "exclusive_write", 00:07:58.847 "zoned": false, 00:07:58.847 "supported_io_types": { 00:07:58.847 "read": true, 00:07:58.847 "write": true, 00:07:58.847 
"unmap": true, 00:07:58.847 "flush": true, 00:07:58.847 "reset": true, 00:07:58.847 "nvme_admin": false, 00:07:58.847 "nvme_io": false, 00:07:58.847 "nvme_io_md": false, 00:07:58.847 "write_zeroes": true, 00:07:58.847 "zcopy": true, 00:07:58.847 "get_zone_info": false, 00:07:58.847 "zone_management": false, 00:07:58.847 "zone_append": false, 00:07:58.847 "compare": false, 00:07:58.847 "compare_and_write": false, 00:07:58.847 "abort": true, 00:07:58.847 "seek_hole": false, 00:07:58.847 "seek_data": false, 00:07:58.847 "copy": true, 00:07:58.847 "nvme_iov_md": false 00:07:58.847 }, 00:07:58.847 "memory_domains": [ 00:07:58.847 { 00:07:58.847 "dma_device_id": "system", 00:07:58.847 "dma_device_type": 1 00:07:58.847 }, 00:07:58.847 { 00:07:58.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.847 "dma_device_type": 2 00:07:58.847 } 00:07:58.847 ], 00:07:58.847 "driver_specific": {} 00:07:58.847 } 00:07:58.847 ] 00:07:58.847 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.847 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:58.847 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:58.847 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.848 "name": "Existed_Raid", 00:07:58.848 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:58.848 "strip_size_kb": 64, 00:07:58.848 "state": "online", 00:07:58.848 "raid_level": "raid0", 00:07:58.848 "superblock": true, 00:07:58.848 "num_base_bdevs": 3, 00:07:58.848 "num_base_bdevs_discovered": 3, 00:07:58.848 "num_base_bdevs_operational": 3, 00:07:58.848 "base_bdevs_list": [ 00:07:58.848 { 00:07:58.848 "name": "NewBaseBdev", 00:07:58.848 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:58.848 "is_configured": true, 00:07:58.848 "data_offset": 2048, 00:07:58.848 "data_size": 63488 00:07:58.848 }, 00:07:58.848 { 00:07:58.848 "name": "BaseBdev2", 00:07:58.848 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:58.848 "is_configured": true, 00:07:58.848 "data_offset": 2048, 00:07:58.848 "data_size": 63488 00:07:58.848 }, 00:07:58.848 { 00:07:58.848 "name": "BaseBdev3", 00:07:58.848 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:58.848 
"is_configured": true, 00:07:58.848 "data_offset": 2048, 00:07:58.848 "data_size": 63488 00:07:58.848 } 00:07:58.848 ] 00:07:58.848 }' 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.848 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.106 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.106 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.366 [2024-11-26 13:20:47.681132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.366 "name": "Existed_Raid", 00:07:59.366 "aliases": [ 00:07:59.366 "0e32b367-92c3-4c1d-b925-265571f1f99f" 00:07:59.366 ], 00:07:59.366 "product_name": "Raid 
Volume", 00:07:59.366 "block_size": 512, 00:07:59.366 "num_blocks": 190464, 00:07:59.366 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:59.366 "assigned_rate_limits": { 00:07:59.366 "rw_ios_per_sec": 0, 00:07:59.366 "rw_mbytes_per_sec": 0, 00:07:59.366 "r_mbytes_per_sec": 0, 00:07:59.366 "w_mbytes_per_sec": 0 00:07:59.366 }, 00:07:59.366 "claimed": false, 00:07:59.366 "zoned": false, 00:07:59.366 "supported_io_types": { 00:07:59.366 "read": true, 00:07:59.366 "write": true, 00:07:59.366 "unmap": true, 00:07:59.366 "flush": true, 00:07:59.366 "reset": true, 00:07:59.366 "nvme_admin": false, 00:07:59.366 "nvme_io": false, 00:07:59.366 "nvme_io_md": false, 00:07:59.366 "write_zeroes": true, 00:07:59.366 "zcopy": false, 00:07:59.366 "get_zone_info": false, 00:07:59.366 "zone_management": false, 00:07:59.366 "zone_append": false, 00:07:59.366 "compare": false, 00:07:59.366 "compare_and_write": false, 00:07:59.366 "abort": false, 00:07:59.366 "seek_hole": false, 00:07:59.366 "seek_data": false, 00:07:59.366 "copy": false, 00:07:59.366 "nvme_iov_md": false 00:07:59.366 }, 00:07:59.366 "memory_domains": [ 00:07:59.366 { 00:07:59.366 "dma_device_id": "system", 00:07:59.366 "dma_device_type": 1 00:07:59.366 }, 00:07:59.366 { 00:07:59.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.366 "dma_device_type": 2 00:07:59.366 }, 00:07:59.366 { 00:07:59.366 "dma_device_id": "system", 00:07:59.366 "dma_device_type": 1 00:07:59.366 }, 00:07:59.366 { 00:07:59.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.366 "dma_device_type": 2 00:07:59.366 }, 00:07:59.366 { 00:07:59.366 "dma_device_id": "system", 00:07:59.366 "dma_device_type": 1 00:07:59.366 }, 00:07:59.366 { 00:07:59.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.366 "dma_device_type": 2 00:07:59.366 } 00:07:59.366 ], 00:07:59.366 "driver_specific": { 00:07:59.366 "raid": { 00:07:59.366 "uuid": "0e32b367-92c3-4c1d-b925-265571f1f99f", 00:07:59.366 "strip_size_kb": 64, 00:07:59.366 "state": "online", 
00:07:59.366 "raid_level": "raid0", 00:07:59.366 "superblock": true, 00:07:59.366 "num_base_bdevs": 3, 00:07:59.366 "num_base_bdevs_discovered": 3, 00:07:59.366 "num_base_bdevs_operational": 3, 00:07:59.366 "base_bdevs_list": [ 00:07:59.366 { 00:07:59.366 "name": "NewBaseBdev", 00:07:59.366 "uuid": "54efb580-84e7-450a-8c7a-a4ccf35c751c", 00:07:59.366 "is_configured": true, 00:07:59.366 "data_offset": 2048, 00:07:59.366 "data_size": 63488 00:07:59.366 }, 00:07:59.366 { 00:07:59.366 "name": "BaseBdev2", 00:07:59.366 "uuid": "6d91d108-39b4-47ce-948b-ce12a9b72676", 00:07:59.366 "is_configured": true, 00:07:59.366 "data_offset": 2048, 00:07:59.366 "data_size": 63488 00:07:59.366 }, 00:07:59.366 { 00:07:59.366 "name": "BaseBdev3", 00:07:59.366 "uuid": "f327e257-a45b-4659-a460-530433c9dbdf", 00:07:59.366 "is_configured": true, 00:07:59.366 "data_offset": 2048, 00:07:59.366 "data_size": 63488 00:07:59.366 } 00:07:59.366 ] 00:07:59.366 } 00:07:59.366 } 00:07:59.366 }' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:59.366 BaseBdev2 00:07:59.366 BaseBdev3' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.366 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.626 13:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 [2024-11-26 13:20:48.000943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:59.626 [2024-11-26 13:20:48.000967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.626 [2024-11-26 13:20:48.001027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.626 [2024-11-26 13:20:48.001077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.626 [2024-11-26 13:20:48.001093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63955 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63955 ']' 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63955 00:07:59.626 13:20:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63955 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.626 killing process with pid 63955 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63955' 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63955 00:07:59.626 [2024-11-26 13:20:48.039146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.626 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63955 00:07:59.885 [2024-11-26 13:20:48.245681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.822 13:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:00.822 00:08:00.822 real 0m11.101s 00:08:00.822 user 0m18.769s 00:08:00.822 sys 0m1.530s 00:08:00.822 13:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.822 ************************************ 00:08:00.822 END TEST raid_state_function_test_sb 00:08:00.822 ************************************ 00:08:00.822 13:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.822 13:20:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:00.822 13:20:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:00.822 13:20:49 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.822 13:20:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.822 ************************************ 00:08:00.822 START TEST raid_superblock_test 00:08:00.822 ************************************ 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:00.822 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:00.823 13:20:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64585 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64585 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64585 ']' 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.823 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.823 [2024-11-26 13:20:49.243823] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:08:00.823 [2024-11-26 13:20:49.244013] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64585 ] 00:08:01.081 [2024-11-26 13:20:49.421449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.081 [2024-11-26 13:20:49.518291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.340 [2024-11-26 13:20:49.689890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.340 [2024-11-26 13:20:49.689950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:01.599 
13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.599 malloc1 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.599 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.599 [2024-11-26 13:20:50.161934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.599 [2024-11-26 13:20:50.162000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.599 [2024-11-26 13:20:50.162030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:01.600 [2024-11-26 13:20:50.162044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.860 [2024-11-26 13:20:50.164365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.860 [2024-11-26 13:20:50.164403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.860 pt1 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 malloc2 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 [2024-11-26 13:20:50.213855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:01.860 [2024-11-26 13:20:50.213905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.860 [2024-11-26 13:20:50.213930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:01.860 [2024-11-26 13:20:50.213943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.860 [2024-11-26 13:20:50.216412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.860 [2024-11-26 13:20:50.216448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:01.860 
pt2 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 malloc3 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 [2024-11-26 13:20:50.269096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:01.860 [2024-11-26 13:20:50.269166] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.860 [2024-11-26 13:20:50.269195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:01.860 [2024-11-26 13:20:50.269209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.860 [2024-11-26 13:20:50.271626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.860 [2024-11-26 13:20:50.271663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:01.860 pt3 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.860 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 [2024-11-26 13:20:50.281151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.860 [2024-11-26 13:20:50.283332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.861 [2024-11-26 13:20:50.283417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:01.861 [2024-11-26 13:20:50.283615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:01.861 [2024-11-26 13:20:50.283635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:01.861 [2024-11-26 13:20:50.283884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:01.861 [2024-11-26 13:20:50.284081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:01.861 [2024-11-26 13:20:50.284103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:01.861 [2024-11-26 13:20:50.284314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.861 13:20:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.861 "name": "raid_bdev1", 00:08:01.861 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:01.861 "strip_size_kb": 64, 00:08:01.861 "state": "online", 00:08:01.861 "raid_level": "raid0", 00:08:01.861 "superblock": true, 00:08:01.861 "num_base_bdevs": 3, 00:08:01.861 "num_base_bdevs_discovered": 3, 00:08:01.861 "num_base_bdevs_operational": 3, 00:08:01.861 "base_bdevs_list": [ 00:08:01.861 { 00:08:01.861 "name": "pt1", 00:08:01.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.861 "is_configured": true, 00:08:01.861 "data_offset": 2048, 00:08:01.861 "data_size": 63488 00:08:01.861 }, 00:08:01.861 { 00:08:01.861 "name": "pt2", 00:08:01.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.861 "is_configured": true, 00:08:01.861 "data_offset": 2048, 00:08:01.861 "data_size": 63488 00:08:01.861 }, 00:08:01.861 { 00:08:01.861 "name": "pt3", 00:08:01.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:01.861 "is_configured": true, 00:08:01.861 "data_offset": 2048, 00:08:01.861 "data_size": 63488 00:08:01.861 } 00:08:01.861 ] 00:08:01.861 }' 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.861 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.429 [2024-11-26 13:20:50.765578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.429 "name": "raid_bdev1", 00:08:02.429 "aliases": [ 00:08:02.429 "4d43ba78-c080-4387-bc5d-b31a6a833d7c" 00:08:02.429 ], 00:08:02.429 "product_name": "Raid Volume", 00:08:02.429 "block_size": 512, 00:08:02.429 "num_blocks": 190464, 00:08:02.429 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:02.429 "assigned_rate_limits": { 00:08:02.429 "rw_ios_per_sec": 0, 00:08:02.429 "rw_mbytes_per_sec": 0, 00:08:02.429 "r_mbytes_per_sec": 0, 00:08:02.429 "w_mbytes_per_sec": 0 00:08:02.429 }, 00:08:02.429 "claimed": false, 00:08:02.429 "zoned": false, 00:08:02.429 "supported_io_types": { 00:08:02.429 "read": true, 00:08:02.429 "write": true, 00:08:02.429 "unmap": true, 00:08:02.429 "flush": true, 00:08:02.429 "reset": true, 00:08:02.429 "nvme_admin": false, 00:08:02.429 "nvme_io": false, 00:08:02.429 "nvme_io_md": false, 00:08:02.429 "write_zeroes": true, 00:08:02.429 "zcopy": false, 00:08:02.429 "get_zone_info": false, 00:08:02.429 "zone_management": false, 00:08:02.429 "zone_append": false, 00:08:02.429 "compare": 
false, 00:08:02.429 "compare_and_write": false, 00:08:02.429 "abort": false, 00:08:02.429 "seek_hole": false, 00:08:02.429 "seek_data": false, 00:08:02.429 "copy": false, 00:08:02.429 "nvme_iov_md": false 00:08:02.429 }, 00:08:02.429 "memory_domains": [ 00:08:02.429 { 00:08:02.429 "dma_device_id": "system", 00:08:02.429 "dma_device_type": 1 00:08:02.429 }, 00:08:02.429 { 00:08:02.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.429 "dma_device_type": 2 00:08:02.429 }, 00:08:02.429 { 00:08:02.429 "dma_device_id": "system", 00:08:02.429 "dma_device_type": 1 00:08:02.429 }, 00:08:02.429 { 00:08:02.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.429 "dma_device_type": 2 00:08:02.429 }, 00:08:02.429 { 00:08:02.429 "dma_device_id": "system", 00:08:02.429 "dma_device_type": 1 00:08:02.429 }, 00:08:02.429 { 00:08:02.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.429 "dma_device_type": 2 00:08:02.429 } 00:08:02.429 ], 00:08:02.429 "driver_specific": { 00:08:02.429 "raid": { 00:08:02.429 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:02.429 "strip_size_kb": 64, 00:08:02.429 "state": "online", 00:08:02.429 "raid_level": "raid0", 00:08:02.429 "superblock": true, 00:08:02.429 "num_base_bdevs": 3, 00:08:02.429 "num_base_bdevs_discovered": 3, 00:08:02.429 "num_base_bdevs_operational": 3, 00:08:02.429 "base_bdevs_list": [ 00:08:02.429 { 00:08:02.429 "name": "pt1", 00:08:02.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.429 "is_configured": true, 00:08:02.429 "data_offset": 2048, 00:08:02.429 "data_size": 63488 00:08:02.429 }, 00:08:02.429 { 00:08:02.429 "name": "pt2", 00:08:02.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.429 "is_configured": true, 00:08:02.429 "data_offset": 2048, 00:08:02.429 "data_size": 63488 00:08:02.429 }, 00:08:02.429 { 00:08:02.429 "name": "pt3", 00:08:02.429 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:02.429 "is_configured": true, 00:08:02.429 "data_offset": 2048, 00:08:02.429 "data_size": 
63488 00:08:02.429 } 00:08:02.429 ] 00:08:02.429 } 00:08:02.429 } 00:08:02.429 }' 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:02.429 pt2 00:08:02.429 pt3' 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.429 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.430 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.430 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.430 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:02.430 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.430 13:20:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.430 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.430 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:02.689 [2024-11-26 13:20:51.077614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d43ba78-c080-4387-bc5d-b31a6a833d7c 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d43ba78-c080-4387-bc5d-b31a6a833d7c ']' 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.689 [2024-11-26 13:20:51.129308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.689 [2024-11-26 13:20:51.129346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.689 [2024-11-26 13:20:51.129414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.689 [2024-11-26 13:20:51.129476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.689 [2024-11-26 13:20:51.129491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:02.689 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.948 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:02.948 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:02.948 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.949 [2024-11-26 13:20:51.277385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:02.949 [2024-11-26 13:20:51.279672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:02.949 [2024-11-26 13:20:51.279787] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:02.949 [2024-11-26 13:20:51.279845] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:02.949 [2024-11-26 13:20:51.279920] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:02.949 [2024-11-26 13:20:51.279953] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:02.949 [2024-11-26 13:20:51.279979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.949 [2024-11-26 13:20:51.279994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:02.949 request: 00:08:02.949 { 00:08:02.949 "name": "raid_bdev1", 00:08:02.949 "raid_level": "raid0", 00:08:02.949 "base_bdevs": [ 00:08:02.949 "malloc1", 00:08:02.949 "malloc2", 00:08:02.949 "malloc3" 00:08:02.949 ], 00:08:02.949 "strip_size_kb": 64, 00:08:02.949 "superblock": false, 00:08:02.949 "method": "bdev_raid_create", 00:08:02.949 "req_id": 1 00:08:02.949 } 00:08:02.949 Got JSON-RPC error response 00:08:02.949 response: 00:08:02.949 { 00:08:02.949 "code": -17, 00:08:02.949 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:02.949 } 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.949 [2024-11-26 13:20:51.341372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.949 [2024-11-26 13:20:51.341438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.949 [2024-11-26 13:20:51.341462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:02.949 [2024-11-26 13:20:51.341476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.949 [2024-11-26 13:20:51.343941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.949 [2024-11-26 13:20:51.343982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.949 [2024-11-26 13:20:51.344070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:02.949 [2024-11-26 13:20:51.344133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
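The duplicate-create check traced above (the `NOT rpc_cmd bdev_raid_create ...` call that expects the "File exists" JSON-RPC error) relies on a negation helper from `autotest_common.sh`. A minimal sketch of that pattern, assuming only the inversion logic (the real helper also tracks the exit status in `es` and handles function-vs-binary dispatch):

```shell
# Hypothetical minimal version of the NOT helper seen in the trace: run a
# command and succeed only if the command fails, as a negative test expects.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    fi
    return 0        # command failed, which is the expected outcome
}

# A create that collides with an existing raid bdev should fail, so NOT passes.
NOT false && echo "negative test passed"
```

In the log, the matching checks are `[[ 1 == 0 ]]` on the RPC status followed by `es=1` and `(( !es == 0 ))`, which is the same "must fail" contract expressed through the helper's bookkeeping.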
00:08:02.949 pt1 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.949 "name": "raid_bdev1", 00:08:02.949 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:02.949 
"strip_size_kb": 64, 00:08:02.949 "state": "configuring", 00:08:02.949 "raid_level": "raid0", 00:08:02.949 "superblock": true, 00:08:02.949 "num_base_bdevs": 3, 00:08:02.949 "num_base_bdevs_discovered": 1, 00:08:02.949 "num_base_bdevs_operational": 3, 00:08:02.949 "base_bdevs_list": [ 00:08:02.949 { 00:08:02.949 "name": "pt1", 00:08:02.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.949 "is_configured": true, 00:08:02.949 "data_offset": 2048, 00:08:02.949 "data_size": 63488 00:08:02.949 }, 00:08:02.949 { 00:08:02.949 "name": null, 00:08:02.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.949 "is_configured": false, 00:08:02.949 "data_offset": 2048, 00:08:02.949 "data_size": 63488 00:08:02.949 }, 00:08:02.949 { 00:08:02.949 "name": null, 00:08:02.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:02.949 "is_configured": false, 00:08:02.949 "data_offset": 2048, 00:08:02.949 "data_size": 63488 00:08:02.949 } 00:08:02.949 ] 00:08:02.949 }' 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.949 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.518 [2024-11-26 13:20:51.857474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.518 [2024-11-26 13:20:51.857542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.518 [2024-11-26 13:20:51.857566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:03.518 [2024-11-26 13:20:51.857578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.518 [2024-11-26 13:20:51.858001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.518 [2024-11-26 13:20:51.858038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.518 [2024-11-26 13:20:51.858143] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:03.518 [2024-11-26 13:20:51.858169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.518 pt2 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.518 [2024-11-26 13:20:51.865506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.518 13:20:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.518 "name": "raid_bdev1", 00:08:03.518 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:03.518 "strip_size_kb": 64, 00:08:03.518 "state": "configuring", 00:08:03.518 "raid_level": "raid0", 00:08:03.518 "superblock": true, 00:08:03.518 "num_base_bdevs": 3, 00:08:03.518 "num_base_bdevs_discovered": 1, 00:08:03.518 "num_base_bdevs_operational": 3, 00:08:03.518 "base_bdevs_list": [ 00:08:03.518 { 00:08:03.518 "name": "pt1", 00:08:03.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.518 "is_configured": true, 00:08:03.518 "data_offset": 2048, 00:08:03.518 "data_size": 63488 00:08:03.518 }, 00:08:03.518 { 00:08:03.518 "name": null, 00:08:03.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.518 "is_configured": false, 00:08:03.518 "data_offset": 0, 00:08:03.518 "data_size": 63488 00:08:03.518 }, 00:08:03.518 { 00:08:03.518 "name": null, 00:08:03.518 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:03.518 
"is_configured": false, 00:08:03.518 "data_offset": 2048, 00:08:03.518 "data_size": 63488 00:08:03.518 } 00:08:03.518 ] 00:08:03.518 }' 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.518 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.086 [2024-11-26 13:20:52.393574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.086 [2024-11-26 13:20:52.393693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.086 [2024-11-26 13:20:52.393713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:04.086 [2024-11-26 13:20:52.393727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.086 [2024-11-26 13:20:52.394183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.086 [2024-11-26 13:20:52.394258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.086 [2024-11-26 13:20:52.394332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.086 [2024-11-26 13:20:52.394364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.086 pt2 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
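The repeated `verify_raid_bdev_state raid_bdev1 configuring raid0 64 3` calls above fetch the raid bdev's JSON via `bdev_raid_get_bdevs` and compare individual fields against expected values. A condensed sketch of that check, with a hypothetical `verify_state` function standing in for the real helper (which parses the JSON with jq before comparing):

```shell
# Hypothetical condensed form of verify_raid_bdev_state: compare fields
# extracted from the raid bdev's JSON against the expected test values.
verify_state() {
    local state="$1" level="$2" strip="$3" operational="$4"
    [[ "$state" == configuring ]] &&
    [[ "$level" == raid0 ]] &&
    [[ "$strip" == 64 ]] &&
    [[ "$operational" == 3 ]]
}

# With only pt1 configured, the raid bdev should still be "configuring".
verify_state configuring raid0 64 3 && echo "state verified"
```

This matches the trace: after pt1 is claimed but pt2/pt3 are not yet configured, `num_base_bdevs_discovered` is 1 and the state stays `configuring` until all three base bdevs are attached.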
00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.086 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.086 [2024-11-26 13:20:52.405579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:04.086 [2024-11-26 13:20:52.405661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.086 [2024-11-26 13:20:52.405679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:04.086 [2024-11-26 13:20:52.405698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.086 [2024-11-26 13:20:52.406157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.086 [2024-11-26 13:20:52.406208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:04.086 [2024-11-26 13:20:52.406310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:04.086 [2024-11-26 13:20:52.406341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:04.086 [2024-11-26 13:20:52.406496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.086 [2024-11-26 13:20:52.406525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:04.087 [2024-11-26 13:20:52.406841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:04.087 [2024-11-26 13:20:52.407022] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.087 [2024-11-26 13:20:52.407042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.087 [2024-11-26 13:20:52.407181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.087 pt3 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.087 "name": "raid_bdev1", 00:08:04.087 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:04.087 "strip_size_kb": 64, 00:08:04.087 "state": "online", 00:08:04.087 "raid_level": "raid0", 00:08:04.087 "superblock": true, 00:08:04.087 "num_base_bdevs": 3, 00:08:04.087 "num_base_bdevs_discovered": 3, 00:08:04.087 "num_base_bdevs_operational": 3, 00:08:04.087 "base_bdevs_list": [ 00:08:04.087 { 00:08:04.087 "name": "pt1", 00:08:04.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.087 "is_configured": true, 00:08:04.087 "data_offset": 2048, 00:08:04.087 "data_size": 63488 00:08:04.087 }, 00:08:04.087 { 00:08:04.087 "name": "pt2", 00:08:04.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.087 "is_configured": true, 00:08:04.087 "data_offset": 2048, 00:08:04.087 "data_size": 63488 00:08:04.087 }, 00:08:04.087 { 00:08:04.087 "name": "pt3", 00:08:04.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:04.087 "is_configured": true, 00:08:04.087 "data_offset": 2048, 00:08:04.087 "data_size": 63488 00:08:04.087 } 00:08:04.087 ] 00:08:04.087 }' 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.087 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:04.346 13:20:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.346 [2024-11-26 13:20:52.885983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.346 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.606 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.606 "name": "raid_bdev1", 00:08:04.606 "aliases": [ 00:08:04.606 "4d43ba78-c080-4387-bc5d-b31a6a833d7c" 00:08:04.606 ], 00:08:04.606 "product_name": "Raid Volume", 00:08:04.606 "block_size": 512, 00:08:04.606 "num_blocks": 190464, 00:08:04.606 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:04.606 "assigned_rate_limits": { 00:08:04.606 "rw_ios_per_sec": 0, 00:08:04.606 "rw_mbytes_per_sec": 0, 00:08:04.606 "r_mbytes_per_sec": 0, 00:08:04.606 "w_mbytes_per_sec": 0 00:08:04.606 }, 00:08:04.606 "claimed": false, 00:08:04.606 "zoned": false, 00:08:04.606 "supported_io_types": { 00:08:04.606 "read": true, 00:08:04.606 "write": true, 00:08:04.606 "unmap": true, 00:08:04.606 "flush": true, 00:08:04.606 "reset": true, 00:08:04.606 "nvme_admin": false, 00:08:04.606 "nvme_io": false, 00:08:04.606 "nvme_io_md": false, 00:08:04.606 
"write_zeroes": true, 00:08:04.606 "zcopy": false, 00:08:04.606 "get_zone_info": false, 00:08:04.606 "zone_management": false, 00:08:04.606 "zone_append": false, 00:08:04.606 "compare": false, 00:08:04.606 "compare_and_write": false, 00:08:04.606 "abort": false, 00:08:04.606 "seek_hole": false, 00:08:04.606 "seek_data": false, 00:08:04.606 "copy": false, 00:08:04.606 "nvme_iov_md": false 00:08:04.606 }, 00:08:04.606 "memory_domains": [ 00:08:04.606 { 00:08:04.606 "dma_device_id": "system", 00:08:04.606 "dma_device_type": 1 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.606 "dma_device_type": 2 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "dma_device_id": "system", 00:08:04.606 "dma_device_type": 1 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.606 "dma_device_type": 2 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "dma_device_id": "system", 00:08:04.606 "dma_device_type": 1 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.606 "dma_device_type": 2 00:08:04.606 } 00:08:04.606 ], 00:08:04.606 "driver_specific": { 00:08:04.606 "raid": { 00:08:04.606 "uuid": "4d43ba78-c080-4387-bc5d-b31a6a833d7c", 00:08:04.606 "strip_size_kb": 64, 00:08:04.606 "state": "online", 00:08:04.606 "raid_level": "raid0", 00:08:04.606 "superblock": true, 00:08:04.606 "num_base_bdevs": 3, 00:08:04.606 "num_base_bdevs_discovered": 3, 00:08:04.606 "num_base_bdevs_operational": 3, 00:08:04.606 "base_bdevs_list": [ 00:08:04.606 { 00:08:04.606 "name": "pt1", 00:08:04.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.606 "is_configured": true, 00:08:04.606 "data_offset": 2048, 00:08:04.606 "data_size": 63488 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "name": "pt2", 00:08:04.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.606 "is_configured": true, 00:08:04.606 "data_offset": 2048, 00:08:04.606 "data_size": 63488 00:08:04.606 }, 00:08:04.606 
{ 00:08:04.606 "name": "pt3", 00:08:04.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:04.606 "is_configured": true, 00:08:04.606 "data_offset": 2048, 00:08:04.606 "data_size": 63488 00:08:04.606 } 00:08:04.606 ] 00:08:04.606 } 00:08:04.606 } 00:08:04.606 }' 00:08:04.606 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.606 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:04.606 pt2 00:08:04.606 pt3' 00:08:04.606 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:04.606 13:20:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.866 
[2024-11-26 13:20:53.194014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d43ba78-c080-4387-bc5d-b31a6a833d7c '!=' 4d43ba78-c080-4387-bc5d-b31a6a833d7c ']' 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64585 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64585 ']' 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64585 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64585 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.866 killing process with pid 64585 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64585' 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64585 00:08:04.866 [2024-11-26 13:20:53.267392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.866 [2024-11-26 13:20:53.267467] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.866 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64585 00:08:04.866 [2024-11-26 13:20:53.267523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.866 [2024-11-26 13:20:53.267540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:05.125 [2024-11-26 13:20:53.475687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.062 13:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:06.062 00:08:06.062 real 0m5.175s 00:08:06.062 user 0m7.910s 00:08:06.062 sys 0m0.770s 00:08:06.062 13:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.062 13:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.062 ************************************ 00:08:06.062 END TEST raid_superblock_test 00:08:06.062 ************************************ 00:08:06.062 13:20:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:06.062 13:20:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.062 13:20:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.062 13:20:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.062 ************************************ 00:08:06.062 START TEST raid_read_error_test 00:08:06.062 ************************************ 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:06.062 13:20:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mENO9fohvR 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64834 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64834 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 64834 ']' 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.062 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.062 [2024-11-26 13:20:54.469895] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:08:06.062 [2024-11-26 13:20:54.470195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64834 ] 00:08:06.321 [2024-11-26 13:20:54.635957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.321 [2024-11-26 13:20:54.742349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.580 [2024-11-26 13:20:54.909405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.580 [2024-11-26 13:20:54.909466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 BaseBdev1_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 true 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 [2024-11-26 13:20:55.464038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.148 [2024-11-26 13:20:55.464107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.148 [2024-11-26 13:20:55.464133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.148 [2024-11-26 13:20:55.464148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.148 [2024-11-26 13:20:55.466734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.148 [2024-11-26 13:20:55.466936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.148 BaseBdev1 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 BaseBdev2_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 true 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 [2024-11-26 13:20:55.522189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.148 [2024-11-26 13:20:55.522296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.148 [2024-11-26 13:20:55.522321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.148 [2024-11-26 13:20:55.522336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.148 [2024-11-26 13:20:55.524826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.148 [2024-11-26 13:20:55.524867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.148 BaseBdev2 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 BaseBdev3_malloc 00:08:07.148 13:20:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 true 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 [2024-11-26 13:20:55.585989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:07.148 [2024-11-26 13:20:55.586044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.148 [2024-11-26 13:20:55.586067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:07.148 [2024-11-26 13:20:55.586081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.148 [2024-11-26 13:20:55.588522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.148 [2024-11-26 13:20:55.588565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:07.148 BaseBdev3 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.148 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.148 [2024-11-26 13:20:55.594072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.148 [2024-11-26 13:20:55.596195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.148 [2024-11-26 13:20:55.596304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.148 [2024-11-26 13:20:55.596530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:07.148 [2024-11-26 13:20:55.596548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:07.148 [2024-11-26 13:20:55.596842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:07.149 [2024-11-26 13:20:55.597037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:07.149 [2024-11-26 13:20:55.597072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:07.149 [2024-11-26 13:20:55.597227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.149 13:20:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.149 "name": "raid_bdev1", 00:08:07.149 "uuid": "5138c24c-7907-4ad1-9991-c53cf605b525", 00:08:07.149 "strip_size_kb": 64, 00:08:07.149 "state": "online", 00:08:07.149 "raid_level": "raid0", 00:08:07.149 "superblock": true, 00:08:07.149 "num_base_bdevs": 3, 00:08:07.149 "num_base_bdevs_discovered": 3, 00:08:07.149 "num_base_bdevs_operational": 3, 00:08:07.149 "base_bdevs_list": [ 00:08:07.149 { 00:08:07.149 "name": "BaseBdev1", 00:08:07.149 "uuid": "d14975ba-a280-5846-9332-0c6c49ceb59c", 00:08:07.149 "is_configured": true, 00:08:07.149 "data_offset": 2048, 00:08:07.149 "data_size": 63488 00:08:07.149 }, 00:08:07.149 { 00:08:07.149 "name": "BaseBdev2", 00:08:07.149 "uuid": "ad5254d9-56af-5672-93ca-be11ec1ad03e", 00:08:07.149 "is_configured": true, 00:08:07.149 "data_offset": 2048, 00:08:07.149 "data_size": 63488 
00:08:07.149 }, 00:08:07.149 { 00:08:07.149 "name": "BaseBdev3", 00:08:07.149 "uuid": "b9551c94-7483-5a26-ad1a-4c76295018c2", 00:08:07.149 "is_configured": true, 00:08:07.149 "data_offset": 2048, 00:08:07.149 "data_size": 63488 00:08:07.149 } 00:08:07.149 ] 00:08:07.149 }' 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.149 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.717 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.717 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.717 [2024-11-26 13:20:56.195250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.653 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.653 "name": "raid_bdev1", 00:08:08.653 "uuid": "5138c24c-7907-4ad1-9991-c53cf605b525", 00:08:08.653 "strip_size_kb": 64, 00:08:08.653 "state": "online", 00:08:08.653 "raid_level": "raid0", 00:08:08.653 "superblock": true, 00:08:08.653 "num_base_bdevs": 3, 00:08:08.653 "num_base_bdevs_discovered": 3, 00:08:08.653 "num_base_bdevs_operational": 3, 00:08:08.653 "base_bdevs_list": [ 00:08:08.653 { 00:08:08.653 "name": "BaseBdev1", 00:08:08.653 "uuid": "d14975ba-a280-5846-9332-0c6c49ceb59c", 00:08:08.653 "is_configured": true, 00:08:08.653 "data_offset": 2048, 00:08:08.653 "data_size": 63488 
00:08:08.653 }, 00:08:08.653 { 00:08:08.653 "name": "BaseBdev2", 00:08:08.653 "uuid": "ad5254d9-56af-5672-93ca-be11ec1ad03e", 00:08:08.653 "is_configured": true, 00:08:08.653 "data_offset": 2048, 00:08:08.654 "data_size": 63488 00:08:08.654 }, 00:08:08.654 { 00:08:08.654 "name": "BaseBdev3", 00:08:08.654 "uuid": "b9551c94-7483-5a26-ad1a-4c76295018c2", 00:08:08.654 "is_configured": true, 00:08:08.654 "data_offset": 2048, 00:08:08.654 "data_size": 63488 00:08:08.654 } 00:08:08.654 ] 00:08:08.654 }' 00:08:08.654 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.654 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.222 [2024-11-26 13:20:57.599200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.222 [2024-11-26 13:20:57.599427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.222 [2024-11-26 13:20:57.602356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.222 [2024-11-26 13:20:57.602613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.222 [2024-11-26 13:20:57.602683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.222 [2024-11-26 13:20:57.602698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:09.222 { 00:08:09.222 "results": [ 00:08:09.222 { 00:08:09.222 "job": "raid_bdev1", 00:08:09.222 "core_mask": "0x1", 00:08:09.222 "workload": "randrw", 00:08:09.222 "percentage": 50, 
00:08:09.222 "status": "finished", 00:08:09.222 "queue_depth": 1, 00:08:09.222 "io_size": 131072, 00:08:09.222 "runtime": 1.402232, 00:08:09.222 "iops": 13683.898242230958, 00:08:09.222 "mibps": 1710.4872802788698, 00:08:09.222 "io_failed": 1, 00:08:09.222 "io_timeout": 0, 00:08:09.222 "avg_latency_us": 102.14094249072622, 00:08:09.222 "min_latency_us": 33.28, 00:08:09.222 "max_latency_us": 1563.9272727272728 00:08:09.222 } 00:08:09.222 ], 00:08:09.222 "core_count": 1 00:08:09.222 } 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64834 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 64834 ']' 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 64834 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64834 00:08:09.222 killing process with pid 64834 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64834' 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 64834 00:08:09.222 [2024-11-26 13:20:57.638153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.222 13:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 64834 00:08:09.481 [2024-11-26 13:20:57.801522] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mENO9fohvR 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:10.419 00:08:10.419 real 0m4.302s 00:08:10.419 user 0m5.386s 00:08:10.419 sys 0m0.534s 00:08:10.419 ************************************ 00:08:10.419 END TEST raid_read_error_test 00:08:10.419 ************************************ 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.419 13:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.419 13:20:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:10.419 13:20:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.419 13:20:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.419 13:20:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.419 ************************************ 00:08:10.419 START TEST raid_write_error_test 00:08:10.419 ************************************ 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:10.419 13:20:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.419 13:20:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SIQavcel07 00:08:10.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.419 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64978 00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64978 00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64978 ']' 00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.420 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.420 [2024-11-26 13:20:58.847893] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:08:10.420 [2024-11-26 13:20:58.848079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64978 ] 00:08:10.748 [2024-11-26 13:20:59.029109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.748 [2024-11-26 13:20:59.126198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.020 [2024-11-26 13:20:59.292288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.020 [2024-11-26 13:20:59.292350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.278 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.278 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.279 BaseBdev1_malloc 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.279 true 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.279 [2024-11-26 13:20:59.768542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.279 [2024-11-26 13:20:59.768623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.279 [2024-11-26 13:20:59.768649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.279 [2024-11-26 13:20:59.768665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.279 [2024-11-26 13:20:59.771110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.279 [2024-11-26 13:20:59.772110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.279 BaseBdev1 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.279 BaseBdev2_malloc 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.279 true 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.279 [2024-11-26 13:20:59.831204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.279 [2024-11-26 13:20:59.831461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.279 [2024-11-26 13:20:59.831495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:11.279 [2024-11-26 13:20:59.831513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.279 [2024-11-26 13:20:59.834370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.279 [2024-11-26 13:20:59.834415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.279 BaseBdev2 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.279 13:20:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.279 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 BaseBdev3_malloc 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 true 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 [2024-11-26 13:20:59.889846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:11.537 [2024-11-26 13:20:59.889899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.537 [2024-11-26 13:20:59.889922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:11.537 [2024-11-26 13:20:59.889937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.537 [2024-11-26 13:20:59.892524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.537 [2024-11-26 13:20:59.892568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:11.537 BaseBdev3 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 [2024-11-26 13:20:59.897923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.537 [2024-11-26 13:20:59.900120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.537 [2024-11-26 13:20:59.900214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.537 [2024-11-26 13:20:59.900690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:11.537 [2024-11-26 13:20:59.900812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:11.537 [2024-11-26 13:20:59.901116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:11.537 [2024-11-26 13:20:59.901365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:11.537 [2024-11-26 13:20:59.901386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:11.537 [2024-11-26 13:20:59.901535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.537 "name": "raid_bdev1", 00:08:11.537 "uuid": "dfed9c9f-a4f7-40b2-a4a5-cc8787022e02", 00:08:11.537 "strip_size_kb": 64, 00:08:11.537 "state": "online", 00:08:11.537 "raid_level": "raid0", 00:08:11.537 "superblock": true, 00:08:11.537 "num_base_bdevs": 3, 00:08:11.537 "num_base_bdevs_discovered": 3, 00:08:11.537 "num_base_bdevs_operational": 3, 00:08:11.537 "base_bdevs_list": [ 00:08:11.537 { 00:08:11.537 "name": "BaseBdev1", 
00:08:11.537 "uuid": "a9b4d247-42ba-5439-9f7c-6948189a97e3", 00:08:11.537 "is_configured": true, 00:08:11.537 "data_offset": 2048, 00:08:11.537 "data_size": 63488 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "name": "BaseBdev2", 00:08:11.537 "uuid": "1be8830f-3fa7-5b11-946c-e7ad92921cec", 00:08:11.537 "is_configured": true, 00:08:11.537 "data_offset": 2048, 00:08:11.537 "data_size": 63488 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "name": "BaseBdev3", 00:08:11.537 "uuid": "908879ee-21cb-5829-ae80-7b80675b80e2", 00:08:11.537 "is_configured": true, 00:08:11.537 "data_offset": 2048, 00:08:11.537 "data_size": 63488 00:08:11.537 } 00:08:11.537 ] 00:08:11.537 }' 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.537 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.103 13:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:12.103 13:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:12.103 [2024-11-26 13:21:00.527169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.037 "name": "raid_bdev1", 00:08:13.037 "uuid": "dfed9c9f-a4f7-40b2-a4a5-cc8787022e02", 00:08:13.037 "strip_size_kb": 64, 00:08:13.037 "state": "online", 00:08:13.037 
"raid_level": "raid0", 00:08:13.037 "superblock": true, 00:08:13.037 "num_base_bdevs": 3, 00:08:13.037 "num_base_bdevs_discovered": 3, 00:08:13.037 "num_base_bdevs_operational": 3, 00:08:13.037 "base_bdevs_list": [ 00:08:13.037 { 00:08:13.037 "name": "BaseBdev1", 00:08:13.037 "uuid": "a9b4d247-42ba-5439-9f7c-6948189a97e3", 00:08:13.037 "is_configured": true, 00:08:13.037 "data_offset": 2048, 00:08:13.037 "data_size": 63488 00:08:13.037 }, 00:08:13.037 { 00:08:13.037 "name": "BaseBdev2", 00:08:13.037 "uuid": "1be8830f-3fa7-5b11-946c-e7ad92921cec", 00:08:13.037 "is_configured": true, 00:08:13.037 "data_offset": 2048, 00:08:13.037 "data_size": 63488 00:08:13.037 }, 00:08:13.037 { 00:08:13.037 "name": "BaseBdev3", 00:08:13.037 "uuid": "908879ee-21cb-5829-ae80-7b80675b80e2", 00:08:13.037 "is_configured": true, 00:08:13.037 "data_offset": 2048, 00:08:13.037 "data_size": 63488 00:08:13.037 } 00:08:13.037 ] 00:08:13.037 }' 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.037 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.603 [2024-11-26 13:21:01.923003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.603 [2024-11-26 13:21:01.923163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.603 [2024-11-26 13:21:01.926099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.603 [2024-11-26 13:21:01.926178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.603 [2024-11-26 13:21:01.926229] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.603 [2024-11-26 13:21:01.926242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:13.603 { 00:08:13.603 "results": [ 00:08:13.603 { 00:08:13.603 "job": "raid_bdev1", 00:08:13.603 "core_mask": "0x1", 00:08:13.603 "workload": "randrw", 00:08:13.603 "percentage": 50, 00:08:13.603 "status": "finished", 00:08:13.603 "queue_depth": 1, 00:08:13.603 "io_size": 131072, 00:08:13.603 "runtime": 1.394027, 00:08:13.603 "iops": 13645.359810104108, 00:08:13.603 "mibps": 1705.6699762630135, 00:08:13.603 "io_failed": 1, 00:08:13.603 "io_timeout": 0, 00:08:13.603 "avg_latency_us": 102.5090721757872, 00:08:13.603 "min_latency_us": 33.97818181818182, 00:08:13.603 "max_latency_us": 1422.429090909091 00:08:13.603 } 00:08:13.603 ], 00:08:13.603 "core_count": 1 00:08:13.603 } 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64978 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64978 ']' 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64978 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64978 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 64978' 00:08:13.603 killing process with pid 64978 00:08:13.603 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64978 00:08:13.603 [2024-11-26 13:21:01.962365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.604 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64978 00:08:13.604 [2024-11-26 13:21:02.120081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SIQavcel07 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:14.539 ************************************ 00:08:14.539 END TEST raid_write_error_test 00:08:14.539 ************************************ 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:14.539 00:08:14.539 real 0m4.260s 00:08:14.539 user 0m5.342s 00:08:14.539 sys 0m0.515s 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.539 13:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.539 13:21:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:14.539 13:21:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:14.539 13:21:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:14.539 13:21:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.539 13:21:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.539 ************************************ 00:08:14.539 START TEST raid_state_function_test 00:08:14.539 ************************************ 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:14.539 13:21:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:14.539 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:14.540 Process raid pid: 65122 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65122 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65122' 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65122 00:08:14.540 13:21:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65122 ']' 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.540 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.798 [2024-11-26 13:21:03.153662] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:08:14.798 [2024-11-26 13:21:03.154055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.798 [2024-11-26 13:21:03.321599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.057 [2024-11-26 13:21:03.426883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.057 [2024-11-26 13:21:03.596518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.057 [2024-11-26 13:21:03.596555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.626 [2024-11-26 13:21:04.120823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.626 [2024-11-26 13:21:04.120878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.626 [2024-11-26 13:21:04.120893] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.626 [2024-11-26 13:21:04.120906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.626 [2024-11-26 13:21:04.120914] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.626 [2024-11-26 13:21:04.120925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.626 "name": "Existed_Raid", 00:08:15.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.626 "strip_size_kb": 64, 00:08:15.626 "state": "configuring", 00:08:15.626 "raid_level": "concat", 00:08:15.626 "superblock": false, 00:08:15.626 "num_base_bdevs": 3, 00:08:15.626 "num_base_bdevs_discovered": 0, 00:08:15.626 "num_base_bdevs_operational": 3, 00:08:15.626 "base_bdevs_list": [ 00:08:15.626 { 00:08:15.626 "name": "BaseBdev1", 00:08:15.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.626 "is_configured": false, 00:08:15.626 "data_offset": 0, 00:08:15.626 "data_size": 0 00:08:15.626 }, 00:08:15.626 { 00:08:15.626 "name": "BaseBdev2", 00:08:15.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.626 "is_configured": false, 00:08:15.626 "data_offset": 0, 00:08:15.626 "data_size": 0 00:08:15.626 }, 00:08:15.626 { 00:08:15.626 "name": "BaseBdev3", 00:08:15.626 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:15.626 "is_configured": false, 00:08:15.626 "data_offset": 0, 00:08:15.626 "data_size": 0 00:08:15.626 } 00:08:15.626 ] 00:08:15.626 }' 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.626 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.195 [2024-11-26 13:21:04.628864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.195 [2024-11-26 13:21:04.629014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.195 [2024-11-26 13:21:04.636869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.195 [2024-11-26 13:21:04.637044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.195 [2024-11-26 13:21:04.637149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.195 [2024-11-26 13:21:04.637203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:16.195 [2024-11-26 13:21:04.637217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:16.195 [2024-11-26 13:21:04.637261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.195 [2024-11-26 13:21:04.678947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.195 BaseBdev1 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.195 [ 00:08:16.195 { 00:08:16.195 "name": "BaseBdev1", 00:08:16.195 "aliases": [ 00:08:16.195 "60f88313-0118-43f3-8615-a40e0ee71dc0" 00:08:16.195 ], 00:08:16.195 "product_name": "Malloc disk", 00:08:16.195 "block_size": 512, 00:08:16.195 "num_blocks": 65536, 00:08:16.195 "uuid": "60f88313-0118-43f3-8615-a40e0ee71dc0", 00:08:16.195 "assigned_rate_limits": { 00:08:16.195 "rw_ios_per_sec": 0, 00:08:16.195 "rw_mbytes_per_sec": 0, 00:08:16.195 "r_mbytes_per_sec": 0, 00:08:16.195 "w_mbytes_per_sec": 0 00:08:16.195 }, 00:08:16.195 "claimed": true, 00:08:16.195 "claim_type": "exclusive_write", 00:08:16.195 "zoned": false, 00:08:16.195 "supported_io_types": { 00:08:16.195 "read": true, 00:08:16.195 "write": true, 00:08:16.195 "unmap": true, 00:08:16.195 "flush": true, 00:08:16.195 "reset": true, 00:08:16.195 "nvme_admin": false, 00:08:16.195 "nvme_io": false, 00:08:16.195 "nvme_io_md": false, 00:08:16.195 "write_zeroes": true, 00:08:16.195 "zcopy": true, 00:08:16.195 "get_zone_info": false, 00:08:16.195 "zone_management": false, 00:08:16.195 "zone_append": false, 00:08:16.195 "compare": false, 00:08:16.195 "compare_and_write": false, 00:08:16.195 "abort": true, 00:08:16.195 "seek_hole": false, 00:08:16.195 "seek_data": false, 00:08:16.195 "copy": true, 00:08:16.195 "nvme_iov_md": false 00:08:16.195 }, 00:08:16.195 "memory_domains": [ 00:08:16.195 { 00:08:16.195 "dma_device_id": "system", 00:08:16.195 "dma_device_type": 1 00:08:16.195 }, 00:08:16.195 { 00:08:16.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:16.195 "dma_device_type": 2 00:08:16.195 } 00:08:16.195 ], 00:08:16.195 "driver_specific": {} 00:08:16.195 } 00:08:16.195 ] 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.195 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.195 13:21:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.455 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.455 "name": "Existed_Raid", 00:08:16.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.455 "strip_size_kb": 64, 00:08:16.455 "state": "configuring", 00:08:16.455 "raid_level": "concat", 00:08:16.455 "superblock": false, 00:08:16.455 "num_base_bdevs": 3, 00:08:16.455 "num_base_bdevs_discovered": 1, 00:08:16.455 "num_base_bdevs_operational": 3, 00:08:16.455 "base_bdevs_list": [ 00:08:16.455 { 00:08:16.455 "name": "BaseBdev1", 00:08:16.455 "uuid": "60f88313-0118-43f3-8615-a40e0ee71dc0", 00:08:16.455 "is_configured": true, 00:08:16.455 "data_offset": 0, 00:08:16.455 "data_size": 65536 00:08:16.455 }, 00:08:16.455 { 00:08:16.455 "name": "BaseBdev2", 00:08:16.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.455 "is_configured": false, 00:08:16.455 "data_offset": 0, 00:08:16.455 "data_size": 0 00:08:16.455 }, 00:08:16.455 { 00:08:16.455 "name": "BaseBdev3", 00:08:16.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.455 "is_configured": false, 00:08:16.455 "data_offset": 0, 00:08:16.455 "data_size": 0 00:08:16.455 } 00:08:16.455 ] 00:08:16.455 }' 00:08:16.455 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.455 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.715 [2024-11-26 13:21:05.211075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.715 [2024-11-26 13:21:05.211239] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.715 [2024-11-26 13:21:05.219128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.715 [2024-11-26 13:21:05.221185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.715 [2024-11-26 13:21:05.221243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.715 [2024-11-26 13:21:05.221258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:16.715 [2024-11-26 13:21:05.221271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.715 13:21:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.715 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.716 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.975 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.975 "name": "Existed_Raid", 00:08:16.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.975 "strip_size_kb": 64, 00:08:16.975 "state": "configuring", 00:08:16.975 "raid_level": "concat", 00:08:16.975 "superblock": false, 00:08:16.975 "num_base_bdevs": 3, 00:08:16.975 "num_base_bdevs_discovered": 1, 00:08:16.975 "num_base_bdevs_operational": 3, 00:08:16.975 "base_bdevs_list": [ 00:08:16.975 { 00:08:16.975 "name": "BaseBdev1", 00:08:16.975 "uuid": "60f88313-0118-43f3-8615-a40e0ee71dc0", 00:08:16.975 "is_configured": true, 00:08:16.975 "data_offset": 
0, 00:08:16.975 "data_size": 65536 00:08:16.975 }, 00:08:16.975 { 00:08:16.975 "name": "BaseBdev2", 00:08:16.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.975 "is_configured": false, 00:08:16.975 "data_offset": 0, 00:08:16.975 "data_size": 0 00:08:16.975 }, 00:08:16.975 { 00:08:16.975 "name": "BaseBdev3", 00:08:16.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.975 "is_configured": false, 00:08:16.975 "data_offset": 0, 00:08:16.975 "data_size": 0 00:08:16.975 } 00:08:16.975 ] 00:08:16.975 }' 00:08:16.975 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.975 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.235 [2024-11-26 13:21:05.771206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.235 BaseBdev2 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.235 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.235 [ 00:08:17.235 { 00:08:17.235 "name": "BaseBdev2", 00:08:17.235 "aliases": [ 00:08:17.235 "e4db3d82-f47b-4cbf-b7ab-d8c4fb6966ea" 00:08:17.235 ], 00:08:17.235 "product_name": "Malloc disk", 00:08:17.235 "block_size": 512, 00:08:17.235 "num_blocks": 65536, 00:08:17.235 "uuid": "e4db3d82-f47b-4cbf-b7ab-d8c4fb6966ea", 00:08:17.235 "assigned_rate_limits": { 00:08:17.235 "rw_ios_per_sec": 0, 00:08:17.235 "rw_mbytes_per_sec": 0, 00:08:17.235 "r_mbytes_per_sec": 0, 00:08:17.235 "w_mbytes_per_sec": 0 00:08:17.235 }, 00:08:17.235 "claimed": true, 00:08:17.235 "claim_type": "exclusive_write", 00:08:17.235 "zoned": false, 00:08:17.235 "supported_io_types": { 00:08:17.235 "read": true, 00:08:17.235 "write": true, 00:08:17.235 "unmap": true, 00:08:17.235 "flush": true, 00:08:17.235 "reset": true, 00:08:17.235 "nvme_admin": false, 00:08:17.495 "nvme_io": false, 00:08:17.495 "nvme_io_md": false, 00:08:17.495 "write_zeroes": true, 00:08:17.495 "zcopy": true, 00:08:17.495 "get_zone_info": false, 00:08:17.495 "zone_management": false, 00:08:17.495 "zone_append": false, 00:08:17.495 "compare": false, 00:08:17.495 "compare_and_write": false, 00:08:17.495 "abort": true, 00:08:17.495 "seek_hole": 
false, 00:08:17.495 "seek_data": false, 00:08:17.495 "copy": true, 00:08:17.495 "nvme_iov_md": false 00:08:17.495 }, 00:08:17.495 "memory_domains": [ 00:08:17.495 { 00:08:17.495 "dma_device_id": "system", 00:08:17.495 "dma_device_type": 1 00:08:17.495 }, 00:08:17.495 { 00:08:17.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.495 "dma_device_type": 2 00:08:17.495 } 00:08:17.495 ], 00:08:17.495 "driver_specific": {} 00:08:17.495 } 00:08:17.495 ] 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.495 "name": "Existed_Raid", 00:08:17.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.495 "strip_size_kb": 64, 00:08:17.495 "state": "configuring", 00:08:17.495 "raid_level": "concat", 00:08:17.495 "superblock": false, 00:08:17.495 "num_base_bdevs": 3, 00:08:17.495 "num_base_bdevs_discovered": 2, 00:08:17.495 "num_base_bdevs_operational": 3, 00:08:17.495 "base_bdevs_list": [ 00:08:17.495 { 00:08:17.495 "name": "BaseBdev1", 00:08:17.495 "uuid": "60f88313-0118-43f3-8615-a40e0ee71dc0", 00:08:17.495 "is_configured": true, 00:08:17.495 "data_offset": 0, 00:08:17.495 "data_size": 65536 00:08:17.495 }, 00:08:17.495 { 00:08:17.495 "name": "BaseBdev2", 00:08:17.495 "uuid": "e4db3d82-f47b-4cbf-b7ab-d8c4fb6966ea", 00:08:17.495 "is_configured": true, 00:08:17.495 "data_offset": 0, 00:08:17.495 "data_size": 65536 00:08:17.495 }, 00:08:17.495 { 00:08:17.495 "name": "BaseBdev3", 00:08:17.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.495 "is_configured": false, 00:08:17.495 "data_offset": 0, 00:08:17.495 "data_size": 0 00:08:17.495 } 00:08:17.495 ] 00:08:17.495 }' 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.495 13:21:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.754 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.754 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.754 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.013 [2024-11-26 13:21:06.345443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.013 [2024-11-26 13:21:06.345485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.013 [2024-11-26 13:21:06.345501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:18.013 [2024-11-26 13:21:06.345776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:18.013 [2024-11-26 13:21:06.345964] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.013 [2024-11-26 13:21:06.345978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:18.013 [2024-11-26 13:21:06.346253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.013 BaseBdev3 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.013 13:21:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.013 [ 00:08:18.013 { 00:08:18.013 "name": "BaseBdev3", 00:08:18.013 "aliases": [ 00:08:18.013 "37eaa5d9-d1cb-4ce5-a3d6-927516cb7bcc" 00:08:18.013 ], 00:08:18.013 "product_name": "Malloc disk", 00:08:18.013 "block_size": 512, 00:08:18.013 "num_blocks": 65536, 00:08:18.013 "uuid": "37eaa5d9-d1cb-4ce5-a3d6-927516cb7bcc", 00:08:18.013 "assigned_rate_limits": { 00:08:18.013 "rw_ios_per_sec": 0, 00:08:18.013 "rw_mbytes_per_sec": 0, 00:08:18.013 "r_mbytes_per_sec": 0, 00:08:18.013 "w_mbytes_per_sec": 0 00:08:18.013 }, 00:08:18.013 "claimed": true, 00:08:18.013 "claim_type": "exclusive_write", 00:08:18.013 "zoned": false, 00:08:18.013 "supported_io_types": { 00:08:18.013 "read": true, 00:08:18.013 "write": true, 00:08:18.013 "unmap": true, 00:08:18.013 "flush": true, 00:08:18.013 "reset": true, 00:08:18.013 "nvme_admin": false, 00:08:18.013 "nvme_io": false, 00:08:18.013 "nvme_io_md": false, 00:08:18.013 "write_zeroes": true, 00:08:18.013 "zcopy": true, 00:08:18.013 "get_zone_info": false, 00:08:18.013 "zone_management": false, 00:08:18.013 "zone_append": false, 00:08:18.013 "compare": false, 
00:08:18.013 "compare_and_write": false, 00:08:18.013 "abort": true, 00:08:18.013 "seek_hole": false, 00:08:18.013 "seek_data": false, 00:08:18.013 "copy": true, 00:08:18.013 "nvme_iov_md": false 00:08:18.013 }, 00:08:18.013 "memory_domains": [ 00:08:18.013 { 00:08:18.013 "dma_device_id": "system", 00:08:18.013 "dma_device_type": 1 00:08:18.013 }, 00:08:18.013 { 00:08:18.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.013 "dma_device_type": 2 00:08:18.013 } 00:08:18.013 ], 00:08:18.013 "driver_specific": {} 00:08:18.013 } 00:08:18.013 ] 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.013 "name": "Existed_Raid", 00:08:18.013 "uuid": "9f4789f8-f707-4f94-b2fb-e143fa34a417", 00:08:18.013 "strip_size_kb": 64, 00:08:18.013 "state": "online", 00:08:18.013 "raid_level": "concat", 00:08:18.013 "superblock": false, 00:08:18.013 "num_base_bdevs": 3, 00:08:18.013 "num_base_bdevs_discovered": 3, 00:08:18.013 "num_base_bdevs_operational": 3, 00:08:18.013 "base_bdevs_list": [ 00:08:18.013 { 00:08:18.013 "name": "BaseBdev1", 00:08:18.013 "uuid": "60f88313-0118-43f3-8615-a40e0ee71dc0", 00:08:18.013 "is_configured": true, 00:08:18.013 "data_offset": 0, 00:08:18.013 "data_size": 65536 00:08:18.013 }, 00:08:18.013 { 00:08:18.013 "name": "BaseBdev2", 00:08:18.013 "uuid": "e4db3d82-f47b-4cbf-b7ab-d8c4fb6966ea", 00:08:18.013 "is_configured": true, 00:08:18.013 "data_offset": 0, 00:08:18.013 "data_size": 65536 00:08:18.013 }, 00:08:18.013 { 00:08:18.013 "name": "BaseBdev3", 00:08:18.013 "uuid": "37eaa5d9-d1cb-4ce5-a3d6-927516cb7bcc", 00:08:18.013 "is_configured": true, 00:08:18.013 "data_offset": 0, 00:08:18.013 "data_size": 65536 00:08:18.013 } 00:08:18.013 ] 00:08:18.013 }' 00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:18.013 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.582 [2024-11-26 13:21:06.889885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.582 "name": "Existed_Raid", 00:08:18.582 "aliases": [ 00:08:18.582 "9f4789f8-f707-4f94-b2fb-e143fa34a417" 00:08:18.582 ], 00:08:18.582 "product_name": "Raid Volume", 00:08:18.582 "block_size": 512, 00:08:18.582 "num_blocks": 196608, 00:08:18.582 "uuid": "9f4789f8-f707-4f94-b2fb-e143fa34a417", 00:08:18.582 "assigned_rate_limits": { 00:08:18.582 "rw_ios_per_sec": 0, 00:08:18.582 "rw_mbytes_per_sec": 0, 00:08:18.582 "r_mbytes_per_sec": 
0, 00:08:18.582 "w_mbytes_per_sec": 0 00:08:18.582 }, 00:08:18.582 "claimed": false, 00:08:18.582 "zoned": false, 00:08:18.582 "supported_io_types": { 00:08:18.582 "read": true, 00:08:18.582 "write": true, 00:08:18.582 "unmap": true, 00:08:18.582 "flush": true, 00:08:18.582 "reset": true, 00:08:18.582 "nvme_admin": false, 00:08:18.582 "nvme_io": false, 00:08:18.582 "nvme_io_md": false, 00:08:18.582 "write_zeroes": true, 00:08:18.582 "zcopy": false, 00:08:18.582 "get_zone_info": false, 00:08:18.582 "zone_management": false, 00:08:18.582 "zone_append": false, 00:08:18.582 "compare": false, 00:08:18.582 "compare_and_write": false, 00:08:18.582 "abort": false, 00:08:18.582 "seek_hole": false, 00:08:18.582 "seek_data": false, 00:08:18.582 "copy": false, 00:08:18.582 "nvme_iov_md": false 00:08:18.582 }, 00:08:18.582 "memory_domains": [ 00:08:18.582 { 00:08:18.582 "dma_device_id": "system", 00:08:18.582 "dma_device_type": 1 00:08:18.582 }, 00:08:18.582 { 00:08:18.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.582 "dma_device_type": 2 00:08:18.582 }, 00:08:18.582 { 00:08:18.582 "dma_device_id": "system", 00:08:18.582 "dma_device_type": 1 00:08:18.582 }, 00:08:18.582 { 00:08:18.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.582 "dma_device_type": 2 00:08:18.582 }, 00:08:18.582 { 00:08:18.582 "dma_device_id": "system", 00:08:18.582 "dma_device_type": 1 00:08:18.582 }, 00:08:18.582 { 00:08:18.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.582 "dma_device_type": 2 00:08:18.582 } 00:08:18.582 ], 00:08:18.582 "driver_specific": { 00:08:18.582 "raid": { 00:08:18.582 "uuid": "9f4789f8-f707-4f94-b2fb-e143fa34a417", 00:08:18.582 "strip_size_kb": 64, 00:08:18.582 "state": "online", 00:08:18.582 "raid_level": "concat", 00:08:18.582 "superblock": false, 00:08:18.582 "num_base_bdevs": 3, 00:08:18.582 "num_base_bdevs_discovered": 3, 00:08:18.582 "num_base_bdevs_operational": 3, 00:08:18.582 "base_bdevs_list": [ 00:08:18.582 { 00:08:18.582 "name": "BaseBdev1", 
00:08:18.582 "uuid": "60f88313-0118-43f3-8615-a40e0ee71dc0", 00:08:18.582 "is_configured": true, 00:08:18.582 "data_offset": 0, 00:08:18.582 "data_size": 65536 00:08:18.582 }, 00:08:18.582 { 00:08:18.582 "name": "BaseBdev2", 00:08:18.582 "uuid": "e4db3d82-f47b-4cbf-b7ab-d8c4fb6966ea", 00:08:18.582 "is_configured": true, 00:08:18.582 "data_offset": 0, 00:08:18.582 "data_size": 65536 00:08:18.582 }, 00:08:18.582 { 00:08:18.582 "name": "BaseBdev3", 00:08:18.582 "uuid": "37eaa5d9-d1cb-4ce5-a3d6-927516cb7bcc", 00:08:18.582 "is_configured": true, 00:08:18.582 "data_offset": 0, 00:08:18.582 "data_size": 65536 00:08:18.582 } 00:08:18.582 ] 00:08:18.582 } 00:08:18.582 } 00:08:18.582 }' 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:18.582 BaseBdev2 00:08:18.582 BaseBdev3' 00:08:18.582 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.582 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.841 [2024-11-26 13:21:07.213761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.841 [2024-11-26 13:21:07.213918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.841 [2024-11-26 13:21:07.213991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.841 "name": "Existed_Raid", 00:08:18.841 "uuid": "9f4789f8-f707-4f94-b2fb-e143fa34a417", 00:08:18.841 "strip_size_kb": 64, 00:08:18.841 "state": "offline", 00:08:18.841 "raid_level": "concat", 00:08:18.841 "superblock": false, 00:08:18.841 "num_base_bdevs": 3, 00:08:18.841 "num_base_bdevs_discovered": 2, 00:08:18.841 "num_base_bdevs_operational": 2, 00:08:18.841 "base_bdevs_list": [ 00:08:18.841 { 00:08:18.841 "name": null, 00:08:18.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.841 "is_configured": false, 00:08:18.841 "data_offset": 0, 00:08:18.841 "data_size": 65536 00:08:18.841 }, 00:08:18.841 { 00:08:18.841 "name": "BaseBdev2", 00:08:18.841 "uuid": 
"e4db3d82-f47b-4cbf-b7ab-d8c4fb6966ea", 00:08:18.841 "is_configured": true, 00:08:18.841 "data_offset": 0, 00:08:18.841 "data_size": 65536 00:08:18.841 }, 00:08:18.841 { 00:08:18.841 "name": "BaseBdev3", 00:08:18.841 "uuid": "37eaa5d9-d1cb-4ce5-a3d6-927516cb7bcc", 00:08:18.841 "is_configured": true, 00:08:18.841 "data_offset": 0, 00:08:18.841 "data_size": 65536 00:08:18.841 } 00:08:18.841 ] 00:08:18.841 }' 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.841 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.409 [2024-11-26 13:21:07.847052] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.409 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.409 [2024-11-26 13:21:07.970989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.409 [2024-11-26 13:21:07.971044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:19.669 13:21:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.669 BaseBdev2 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.669 
13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.669 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.670 [ 00:08:19.670 { 00:08:19.670 "name": "BaseBdev2", 00:08:19.670 "aliases": [ 00:08:19.670 "27cb9e15-00b0-4322-bf96-855c6197c7da" 00:08:19.670 ], 00:08:19.670 "product_name": "Malloc disk", 00:08:19.670 "block_size": 512, 00:08:19.670 "num_blocks": 65536, 00:08:19.670 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:19.670 "assigned_rate_limits": { 00:08:19.670 "rw_ios_per_sec": 0, 00:08:19.670 "rw_mbytes_per_sec": 0, 00:08:19.670 "r_mbytes_per_sec": 0, 00:08:19.670 "w_mbytes_per_sec": 0 00:08:19.670 }, 00:08:19.670 "claimed": false, 00:08:19.670 "zoned": false, 00:08:19.670 "supported_io_types": { 00:08:19.670 "read": true, 00:08:19.670 "write": true, 00:08:19.670 "unmap": true, 00:08:19.670 "flush": true, 00:08:19.670 "reset": true, 00:08:19.670 "nvme_admin": false, 00:08:19.670 "nvme_io": false, 00:08:19.670 "nvme_io_md": false, 00:08:19.670 "write_zeroes": true, 
00:08:19.670 "zcopy": true, 00:08:19.670 "get_zone_info": false, 00:08:19.670 "zone_management": false, 00:08:19.670 "zone_append": false, 00:08:19.670 "compare": false, 00:08:19.670 "compare_and_write": false, 00:08:19.670 "abort": true, 00:08:19.670 "seek_hole": false, 00:08:19.670 "seek_data": false, 00:08:19.670 "copy": true, 00:08:19.670 "nvme_iov_md": false 00:08:19.670 }, 00:08:19.670 "memory_domains": [ 00:08:19.670 { 00:08:19.670 "dma_device_id": "system", 00:08:19.670 "dma_device_type": 1 00:08:19.670 }, 00:08:19.670 { 00:08:19.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.670 "dma_device_type": 2 00:08:19.670 } 00:08:19.670 ], 00:08:19.670 "driver_specific": {} 00:08:19.670 } 00:08:19.670 ] 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.670 BaseBdev3 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.670 13:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.670 [ 00:08:19.670 { 00:08:19.670 "name": "BaseBdev3", 00:08:19.670 "aliases": [ 00:08:19.670 "c36f7e54-f9fd-469d-b36a-1a7efa28373f" 00:08:19.670 ], 00:08:19.670 "product_name": "Malloc disk", 00:08:19.670 "block_size": 512, 00:08:19.670 "num_blocks": 65536, 00:08:19.670 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:19.670 "assigned_rate_limits": { 00:08:19.670 "rw_ios_per_sec": 0, 00:08:19.670 "rw_mbytes_per_sec": 0, 00:08:19.670 "r_mbytes_per_sec": 0, 00:08:19.670 "w_mbytes_per_sec": 0 00:08:19.670 }, 00:08:19.670 "claimed": false, 00:08:19.670 "zoned": false, 00:08:19.670 "supported_io_types": { 00:08:19.670 "read": true, 00:08:19.670 "write": true, 00:08:19.670 "unmap": true, 00:08:19.670 "flush": true, 00:08:19.670 "reset": true, 00:08:19.670 "nvme_admin": false, 00:08:19.670 "nvme_io": false, 00:08:19.670 "nvme_io_md": false, 00:08:19.670 "write_zeroes": true, 
00:08:19.670 "zcopy": true, 00:08:19.670 "get_zone_info": false, 00:08:19.670 "zone_management": false, 00:08:19.670 "zone_append": false, 00:08:19.670 "compare": false, 00:08:19.670 "compare_and_write": false, 00:08:19.670 "abort": true, 00:08:19.670 "seek_hole": false, 00:08:19.670 "seek_data": false, 00:08:19.670 "copy": true, 00:08:19.670 "nvme_iov_md": false 00:08:19.670 }, 00:08:19.670 "memory_domains": [ 00:08:19.670 { 00:08:19.670 "dma_device_id": "system", 00:08:19.670 "dma_device_type": 1 00:08:19.670 }, 00:08:19.670 { 00:08:19.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.670 "dma_device_type": 2 00:08:19.670 } 00:08:19.670 ], 00:08:19.670 "driver_specific": {} 00:08:19.670 } 00:08:19.670 ] 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.670 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.670 [2024-11-26 13:21:08.231833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.670 [2024-11-26 13:21:08.231892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.670 [2024-11-26 13:21:08.231936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.929 [2024-11-26 13:21:08.233970] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.929 13:21:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.929 "name": "Existed_Raid", 00:08:19.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.930 "strip_size_kb": 64, 00:08:19.930 "state": "configuring", 00:08:19.930 "raid_level": "concat", 00:08:19.930 "superblock": false, 00:08:19.930 "num_base_bdevs": 3, 00:08:19.930 "num_base_bdevs_discovered": 2, 00:08:19.930 "num_base_bdevs_operational": 3, 00:08:19.930 "base_bdevs_list": [ 00:08:19.930 { 00:08:19.930 "name": "BaseBdev1", 00:08:19.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.930 "is_configured": false, 00:08:19.930 "data_offset": 0, 00:08:19.930 "data_size": 0 00:08:19.930 }, 00:08:19.930 { 00:08:19.930 "name": "BaseBdev2", 00:08:19.930 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:19.930 "is_configured": true, 00:08:19.930 "data_offset": 0, 00:08:19.930 "data_size": 65536 00:08:19.930 }, 00:08:19.930 { 00:08:19.930 "name": "BaseBdev3", 00:08:19.930 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:19.930 "is_configured": true, 00:08:19.930 "data_offset": 0, 00:08:19.930 "data_size": 65536 00:08:19.930 } 00:08:19.930 ] 00:08:19.930 }' 00:08:19.930 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.930 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.188 [2024-11-26 13:21:08.715913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.188 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.189 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.448 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.448 "name": "Existed_Raid", 00:08:20.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.448 "strip_size_kb": 64, 00:08:20.448 "state": "configuring", 00:08:20.448 "raid_level": "concat", 00:08:20.448 "superblock": false, 
00:08:20.448 "num_base_bdevs": 3, 00:08:20.448 "num_base_bdevs_discovered": 1, 00:08:20.448 "num_base_bdevs_operational": 3, 00:08:20.448 "base_bdevs_list": [ 00:08:20.448 { 00:08:20.448 "name": "BaseBdev1", 00:08:20.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.448 "is_configured": false, 00:08:20.448 "data_offset": 0, 00:08:20.448 "data_size": 0 00:08:20.448 }, 00:08:20.448 { 00:08:20.448 "name": null, 00:08:20.448 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:20.448 "is_configured": false, 00:08:20.448 "data_offset": 0, 00:08:20.448 "data_size": 65536 00:08:20.448 }, 00:08:20.448 { 00:08:20.448 "name": "BaseBdev3", 00:08:20.448 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:20.448 "is_configured": true, 00:08:20.448 "data_offset": 0, 00:08:20.448 "data_size": 65536 00:08:20.448 } 00:08:20.448 ] 00:08:20.448 }' 00:08:20.448 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.448 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.706 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.706 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:20.706 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.706 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.706 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.964 
13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.964 [2024-11-26 13:21:09.307493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.964 BaseBdev1 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.964 [ 00:08:20.964 { 00:08:20.964 "name": "BaseBdev1", 00:08:20.964 "aliases": [ 00:08:20.964 "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf" 00:08:20.964 ], 00:08:20.964 "product_name": 
"Malloc disk", 00:08:20.964 "block_size": 512, 00:08:20.964 "num_blocks": 65536, 00:08:20.964 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:20.964 "assigned_rate_limits": { 00:08:20.964 "rw_ios_per_sec": 0, 00:08:20.964 "rw_mbytes_per_sec": 0, 00:08:20.964 "r_mbytes_per_sec": 0, 00:08:20.964 "w_mbytes_per_sec": 0 00:08:20.964 }, 00:08:20.964 "claimed": true, 00:08:20.964 "claim_type": "exclusive_write", 00:08:20.964 "zoned": false, 00:08:20.964 "supported_io_types": { 00:08:20.964 "read": true, 00:08:20.964 "write": true, 00:08:20.964 "unmap": true, 00:08:20.964 "flush": true, 00:08:20.964 "reset": true, 00:08:20.964 "nvme_admin": false, 00:08:20.964 "nvme_io": false, 00:08:20.964 "nvme_io_md": false, 00:08:20.964 "write_zeroes": true, 00:08:20.964 "zcopy": true, 00:08:20.964 "get_zone_info": false, 00:08:20.964 "zone_management": false, 00:08:20.964 "zone_append": false, 00:08:20.964 "compare": false, 00:08:20.964 "compare_and_write": false, 00:08:20.964 "abort": true, 00:08:20.964 "seek_hole": false, 00:08:20.964 "seek_data": false, 00:08:20.964 "copy": true, 00:08:20.964 "nvme_iov_md": false 00:08:20.964 }, 00:08:20.964 "memory_domains": [ 00:08:20.964 { 00:08:20.964 "dma_device_id": "system", 00:08:20.964 "dma_device_type": 1 00:08:20.964 }, 00:08:20.964 { 00:08:20.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.964 "dma_device_type": 2 00:08:20.964 } 00:08:20.964 ], 00:08:20.964 "driver_specific": {} 00:08:20.964 } 00:08:20.964 ] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.964 13:21:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.964 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.964 "name": "Existed_Raid", 00:08:20.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.964 "strip_size_kb": 64, 00:08:20.964 "state": "configuring", 00:08:20.964 "raid_level": "concat", 00:08:20.964 "superblock": false, 00:08:20.964 "num_base_bdevs": 3, 00:08:20.964 "num_base_bdevs_discovered": 2, 00:08:20.965 "num_base_bdevs_operational": 3, 00:08:20.965 "base_bdevs_list": [ 00:08:20.965 { 00:08:20.965 "name": "BaseBdev1", 
00:08:20.965 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:20.965 "is_configured": true, 00:08:20.965 "data_offset": 0, 00:08:20.965 "data_size": 65536 00:08:20.965 }, 00:08:20.965 { 00:08:20.965 "name": null, 00:08:20.965 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:20.965 "is_configured": false, 00:08:20.965 "data_offset": 0, 00:08:20.965 "data_size": 65536 00:08:20.965 }, 00:08:20.965 { 00:08:20.965 "name": "BaseBdev3", 00:08:20.965 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:20.965 "is_configured": true, 00:08:20.965 "data_offset": 0, 00:08:20.965 "data_size": 65536 00:08:20.965 } 00:08:20.965 ] 00:08:20.965 }' 00:08:20.965 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.965 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.532 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.533 [2024-11-26 13:21:09.903634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:21.533 
13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.533 "name": "Existed_Raid", 00:08:21.533 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:21.533 "strip_size_kb": 64, 00:08:21.533 "state": "configuring", 00:08:21.533 "raid_level": "concat", 00:08:21.533 "superblock": false, 00:08:21.533 "num_base_bdevs": 3, 00:08:21.533 "num_base_bdevs_discovered": 1, 00:08:21.533 "num_base_bdevs_operational": 3, 00:08:21.533 "base_bdevs_list": [ 00:08:21.533 { 00:08:21.533 "name": "BaseBdev1", 00:08:21.533 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:21.533 "is_configured": true, 00:08:21.533 "data_offset": 0, 00:08:21.533 "data_size": 65536 00:08:21.533 }, 00:08:21.533 { 00:08:21.533 "name": null, 00:08:21.533 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:21.533 "is_configured": false, 00:08:21.533 "data_offset": 0, 00:08:21.533 "data_size": 65536 00:08:21.533 }, 00:08:21.533 { 00:08:21.533 "name": null, 00:08:21.533 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:21.533 "is_configured": false, 00:08:21.533 "data_offset": 0, 00:08:21.533 "data_size": 65536 00:08:21.533 } 00:08:21.533 ] 00:08:21.533 }' 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.533 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.102 [2024-11-26 13:21:10.459814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.102 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.102 "name": "Existed_Raid", 00:08:22.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.102 "strip_size_kb": 64, 00:08:22.103 "state": "configuring", 00:08:22.103 "raid_level": "concat", 00:08:22.103 "superblock": false, 00:08:22.103 "num_base_bdevs": 3, 00:08:22.103 "num_base_bdevs_discovered": 2, 00:08:22.103 "num_base_bdevs_operational": 3, 00:08:22.103 "base_bdevs_list": [ 00:08:22.103 { 00:08:22.103 "name": "BaseBdev1", 00:08:22.103 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:22.103 "is_configured": true, 00:08:22.103 "data_offset": 0, 00:08:22.103 "data_size": 65536 00:08:22.103 }, 00:08:22.103 { 00:08:22.103 "name": null, 00:08:22.103 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:22.103 "is_configured": false, 00:08:22.103 "data_offset": 0, 00:08:22.103 "data_size": 65536 00:08:22.103 }, 00:08:22.103 { 00:08:22.103 "name": "BaseBdev3", 00:08:22.103 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:22.103 "is_configured": true, 00:08:22.103 "data_offset": 0, 00:08:22.103 "data_size": 65536 00:08:22.103 } 00:08:22.103 ] 00:08:22.103 }' 00:08:22.103 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.103 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:22.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.671 13:21:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.671 [2024-11-26 13:21:11.035959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.671 
13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.671 "name": "Existed_Raid", 00:08:22.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.671 "strip_size_kb": 64, 00:08:22.671 "state": "configuring", 00:08:22.671 "raid_level": "concat", 00:08:22.671 "superblock": false, 00:08:22.671 "num_base_bdevs": 3, 00:08:22.671 "num_base_bdevs_discovered": 1, 00:08:22.671 "num_base_bdevs_operational": 3, 00:08:22.671 "base_bdevs_list": [ 00:08:22.671 { 00:08:22.671 "name": null, 00:08:22.671 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:22.671 "is_configured": false, 00:08:22.671 "data_offset": 0, 00:08:22.671 "data_size": 65536 00:08:22.671 }, 00:08:22.671 { 00:08:22.671 "name": null, 00:08:22.671 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:22.671 "is_configured": false, 00:08:22.671 "data_offset": 0, 00:08:22.671 "data_size": 65536 00:08:22.671 }, 00:08:22.671 { 00:08:22.671 "name": "BaseBdev3", 00:08:22.671 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:22.671 "is_configured": true, 00:08:22.671 "data_offset": 0, 00:08:22.671 "data_size": 65536 00:08:22.671 } 00:08:22.671 ] 00:08:22.671 }' 00:08:22.671 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.671 13:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.239 [2024-11-26 13:21:11.658110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.239 13:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.239 "name": "Existed_Raid", 00:08:23.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.239 "strip_size_kb": 64, 00:08:23.239 "state": "configuring", 00:08:23.239 "raid_level": "concat", 00:08:23.239 "superblock": false, 00:08:23.239 "num_base_bdevs": 3, 00:08:23.239 "num_base_bdevs_discovered": 2, 00:08:23.239 "num_base_bdevs_operational": 3, 00:08:23.239 "base_bdevs_list": [ 00:08:23.239 { 00:08:23.239 "name": null, 00:08:23.239 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:23.239 "is_configured": false, 00:08:23.239 "data_offset": 0, 00:08:23.239 "data_size": 65536 00:08:23.239 }, 00:08:23.239 { 00:08:23.239 "name": "BaseBdev2", 00:08:23.239 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:23.239 "is_configured": true, 00:08:23.239 "data_offset": 
0, 00:08:23.239 "data_size": 65536 00:08:23.239 }, 00:08:23.239 { 00:08:23.239 "name": "BaseBdev3", 00:08:23.239 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:23.239 "is_configured": true, 00:08:23.239 "data_offset": 0, 00:08:23.239 "data_size": 65536 00:08:23.239 } 00:08:23.239 ] 00:08:23.239 }' 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.807 [2024-11-26 13:21:12.271142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:23.807 [2024-11-26 13:21:12.271181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:23.807 [2024-11-26 13:21:12.271194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:23.807 [2024-11-26 13:21:12.271471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:23.807 [2024-11-26 13:21:12.271665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:23.807 [2024-11-26 13:21:12.271679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:23.807 [2024-11-26 13:21:12.271919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.807 NewBaseBdev 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.807 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.808 
13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.808 [ 00:08:23.808 { 00:08:23.808 "name": "NewBaseBdev", 00:08:23.808 "aliases": [ 00:08:23.808 "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf" 00:08:23.808 ], 00:08:23.808 "product_name": "Malloc disk", 00:08:23.808 "block_size": 512, 00:08:23.808 "num_blocks": 65536, 00:08:23.808 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:23.808 "assigned_rate_limits": { 00:08:23.808 "rw_ios_per_sec": 0, 00:08:23.808 "rw_mbytes_per_sec": 0, 00:08:23.808 "r_mbytes_per_sec": 0, 00:08:23.808 "w_mbytes_per_sec": 0 00:08:23.808 }, 00:08:23.808 "claimed": true, 00:08:23.808 "claim_type": "exclusive_write", 00:08:23.808 "zoned": false, 00:08:23.808 "supported_io_types": { 00:08:23.808 "read": true, 00:08:23.808 "write": true, 00:08:23.808 "unmap": true, 00:08:23.808 "flush": true, 00:08:23.808 "reset": true, 00:08:23.808 "nvme_admin": false, 00:08:23.808 "nvme_io": false, 00:08:23.808 "nvme_io_md": false, 00:08:23.808 "write_zeroes": true, 00:08:23.808 "zcopy": true, 00:08:23.808 "get_zone_info": false, 00:08:23.808 "zone_management": false, 00:08:23.808 "zone_append": false, 00:08:23.808 "compare": false, 00:08:23.808 "compare_and_write": false, 00:08:23.808 "abort": true, 00:08:23.808 "seek_hole": false, 00:08:23.808 "seek_data": false, 00:08:23.808 "copy": true, 00:08:23.808 "nvme_iov_md": false 00:08:23.808 }, 00:08:23.808 
"memory_domains": [ 00:08:23.808 { 00:08:23.808 "dma_device_id": "system", 00:08:23.808 "dma_device_type": 1 00:08:23.808 }, 00:08:23.808 { 00:08:23.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.808 "dma_device_type": 2 00:08:23.808 } 00:08:23.808 ], 00:08:23.808 "driver_specific": {} 00:08:23.808 } 00:08:23.808 ] 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.808 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.067 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.067 "name": "Existed_Raid", 00:08:24.067 "uuid": "cac91d61-362b-472c-8b21-70c0b6e75670", 00:08:24.067 "strip_size_kb": 64, 00:08:24.067 "state": "online", 00:08:24.067 "raid_level": "concat", 00:08:24.067 "superblock": false, 00:08:24.067 "num_base_bdevs": 3, 00:08:24.067 "num_base_bdevs_discovered": 3, 00:08:24.067 "num_base_bdevs_operational": 3, 00:08:24.067 "base_bdevs_list": [ 00:08:24.067 { 00:08:24.067 "name": "NewBaseBdev", 00:08:24.067 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:24.067 "is_configured": true, 00:08:24.067 "data_offset": 0, 00:08:24.067 "data_size": 65536 00:08:24.067 }, 00:08:24.067 { 00:08:24.067 "name": "BaseBdev2", 00:08:24.067 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:24.067 "is_configured": true, 00:08:24.067 "data_offset": 0, 00:08:24.067 "data_size": 65536 00:08:24.067 }, 00:08:24.067 { 00:08:24.067 "name": "BaseBdev3", 00:08:24.067 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:24.067 "is_configured": true, 00:08:24.067 "data_offset": 0, 00:08:24.067 "data_size": 65536 00:08:24.067 } 00:08:24.067 ] 00:08:24.067 }' 00:08:24.067 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.067 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.327 [2024-11-26 13:21:12.827580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.327 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.327 "name": "Existed_Raid", 00:08:24.327 "aliases": [ 00:08:24.327 "cac91d61-362b-472c-8b21-70c0b6e75670" 00:08:24.327 ], 00:08:24.327 "product_name": "Raid Volume", 00:08:24.327 "block_size": 512, 00:08:24.327 "num_blocks": 196608, 00:08:24.327 "uuid": "cac91d61-362b-472c-8b21-70c0b6e75670", 00:08:24.327 "assigned_rate_limits": { 00:08:24.327 "rw_ios_per_sec": 0, 00:08:24.327 "rw_mbytes_per_sec": 0, 00:08:24.327 "r_mbytes_per_sec": 0, 00:08:24.327 "w_mbytes_per_sec": 0 00:08:24.327 }, 00:08:24.327 "claimed": false, 00:08:24.327 "zoned": false, 00:08:24.327 "supported_io_types": { 00:08:24.327 "read": true, 00:08:24.327 "write": true, 00:08:24.327 "unmap": true, 00:08:24.327 "flush": true, 00:08:24.327 "reset": true, 00:08:24.327 "nvme_admin": false, 00:08:24.327 "nvme_io": false, 00:08:24.327 "nvme_io_md": false, 00:08:24.327 "write_zeroes": true, 
00:08:24.327 "zcopy": false, 00:08:24.327 "get_zone_info": false, 00:08:24.327 "zone_management": false, 00:08:24.327 "zone_append": false, 00:08:24.327 "compare": false, 00:08:24.327 "compare_and_write": false, 00:08:24.327 "abort": false, 00:08:24.327 "seek_hole": false, 00:08:24.327 "seek_data": false, 00:08:24.327 "copy": false, 00:08:24.327 "nvme_iov_md": false 00:08:24.327 }, 00:08:24.327 "memory_domains": [ 00:08:24.327 { 00:08:24.327 "dma_device_id": "system", 00:08:24.327 "dma_device_type": 1 00:08:24.327 }, 00:08:24.327 { 00:08:24.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.327 "dma_device_type": 2 00:08:24.327 }, 00:08:24.327 { 00:08:24.327 "dma_device_id": "system", 00:08:24.327 "dma_device_type": 1 00:08:24.327 }, 00:08:24.327 { 00:08:24.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.327 "dma_device_type": 2 00:08:24.327 }, 00:08:24.327 { 00:08:24.327 "dma_device_id": "system", 00:08:24.327 "dma_device_type": 1 00:08:24.327 }, 00:08:24.327 { 00:08:24.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.327 "dma_device_type": 2 00:08:24.327 } 00:08:24.327 ], 00:08:24.327 "driver_specific": { 00:08:24.327 "raid": { 00:08:24.327 "uuid": "cac91d61-362b-472c-8b21-70c0b6e75670", 00:08:24.327 "strip_size_kb": 64, 00:08:24.327 "state": "online", 00:08:24.327 "raid_level": "concat", 00:08:24.327 "superblock": false, 00:08:24.327 "num_base_bdevs": 3, 00:08:24.327 "num_base_bdevs_discovered": 3, 00:08:24.327 "num_base_bdevs_operational": 3, 00:08:24.327 "base_bdevs_list": [ 00:08:24.327 { 00:08:24.327 "name": "NewBaseBdev", 00:08:24.327 "uuid": "d53e0c31-69e5-4c1b-9ef5-036c7a73f3cf", 00:08:24.327 "is_configured": true, 00:08:24.327 "data_offset": 0, 00:08:24.327 "data_size": 65536 00:08:24.327 }, 00:08:24.327 { 00:08:24.327 "name": "BaseBdev2", 00:08:24.327 "uuid": "27cb9e15-00b0-4322-bf96-855c6197c7da", 00:08:24.327 "is_configured": true, 00:08:24.327 "data_offset": 0, 00:08:24.328 "data_size": 65536 00:08:24.328 }, 00:08:24.328 { 
00:08:24.328 "name": "BaseBdev3", 00:08:24.328 "uuid": "c36f7e54-f9fd-469d-b36a-1a7efa28373f", 00:08:24.328 "is_configured": true, 00:08:24.328 "data_offset": 0, 00:08:24.328 "data_size": 65536 00:08:24.328 } 00:08:24.328 ] 00:08:24.328 } 00:08:24.328 } 00:08:24.328 }' 00:08:24.328 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:24.588 BaseBdev2 00:08:24.588 BaseBdev3' 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:24.588 [2024-11-26 13:21:13.127377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.588 [2024-11-26 13:21:13.127400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.588 [2024-11-26 13:21:13.127460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.588 [2024-11-26 13:21:13.127511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.588 [2024-11-26 13:21:13.127534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65122 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65122 ']' 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65122 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.588 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65122 00:08:24.848 killing process with pid 65122 00:08:24.848 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.848 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.848 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65122' 00:08:24.848 13:21:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65122 00:08:24.848 [2024-11-26 13:21:13.166017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.848 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65122 00:08:24.848 [2024-11-26 13:21:13.365016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:25.783 00:08:25.783 real 0m11.144s 00:08:25.783 user 0m18.738s 00:08:25.783 sys 0m1.569s 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.783 ************************************ 00:08:25.783 END TEST raid_state_function_test 00:08:25.783 ************************************ 00:08:25.783 13:21:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:25.783 13:21:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:25.783 13:21:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.783 13:21:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.783 ************************************ 00:08:25.783 START TEST raid_state_function_test_sb 00:08:25.783 ************************************ 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.783 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65750 00:08:25.784 Process raid pid: 65750 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65750' 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65750 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 65750 ']' 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.784 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.784 [2024-11-26 13:21:14.329867] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:08:25.784 [2024-11-26 13:21:14.330024] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.042 [2024-11-26 13:21:14.491421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.043 [2024-11-26 13:21:14.588421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.301 [2024-11-26 13:21:14.757939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.301 [2024-11-26 13:21:14.757995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.869 [2024-11-26 13:21:15.236902] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.869 [2024-11-26 13:21:15.236958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.869 [2024-11-26 
13:21:15.236973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.869 [2024-11-26 13:21:15.236987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.869 [2024-11-26 13:21:15.236994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:26.869 [2024-11-26 13:21:15.237006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.869 "name": "Existed_Raid", 00:08:26.869 "uuid": "8d370212-50af-41f3-8546-7fc5a5939361", 00:08:26.869 "strip_size_kb": 64, 00:08:26.869 "state": "configuring", 00:08:26.869 "raid_level": "concat", 00:08:26.869 "superblock": true, 00:08:26.869 "num_base_bdevs": 3, 00:08:26.869 "num_base_bdevs_discovered": 0, 00:08:26.869 "num_base_bdevs_operational": 3, 00:08:26.869 "base_bdevs_list": [ 00:08:26.869 { 00:08:26.869 "name": "BaseBdev1", 00:08:26.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.869 "is_configured": false, 00:08:26.869 "data_offset": 0, 00:08:26.869 "data_size": 0 00:08:26.869 }, 00:08:26.869 { 00:08:26.869 "name": "BaseBdev2", 00:08:26.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.869 "is_configured": false, 00:08:26.869 "data_offset": 0, 00:08:26.869 "data_size": 0 00:08:26.869 }, 00:08:26.869 { 00:08:26.869 "name": "BaseBdev3", 00:08:26.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.869 "is_configured": false, 00:08:26.869 "data_offset": 0, 00:08:26.869 "data_size": 0 00:08:26.869 } 00:08:26.869 ] 00:08:26.869 }' 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.869 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.437 [2024-11-26 13:21:15.760922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.437 [2024-11-26 13:21:15.760952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.437 [2024-11-26 13:21:15.772934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.437 [2024-11-26 13:21:15.773114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.437 [2024-11-26 13:21:15.773222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.437 [2024-11-26 13:21:15.773393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.437 [2024-11-26 13:21:15.773493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:27.437 [2024-11-26 13:21:15.773608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.437 
13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.437 [2024-11-26 13:21:15.815301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.437 BaseBdev1 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.437 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.438 [ 00:08:27.438 { 
00:08:27.438 "name": "BaseBdev1", 00:08:27.438 "aliases": [ 00:08:27.438 "00eb204e-2416-4a5b-b254-b7454ee41459" 00:08:27.438 ], 00:08:27.438 "product_name": "Malloc disk", 00:08:27.438 "block_size": 512, 00:08:27.438 "num_blocks": 65536, 00:08:27.438 "uuid": "00eb204e-2416-4a5b-b254-b7454ee41459", 00:08:27.438 "assigned_rate_limits": { 00:08:27.438 "rw_ios_per_sec": 0, 00:08:27.438 "rw_mbytes_per_sec": 0, 00:08:27.438 "r_mbytes_per_sec": 0, 00:08:27.438 "w_mbytes_per_sec": 0 00:08:27.438 }, 00:08:27.438 "claimed": true, 00:08:27.438 "claim_type": "exclusive_write", 00:08:27.438 "zoned": false, 00:08:27.438 "supported_io_types": { 00:08:27.438 "read": true, 00:08:27.438 "write": true, 00:08:27.438 "unmap": true, 00:08:27.438 "flush": true, 00:08:27.438 "reset": true, 00:08:27.438 "nvme_admin": false, 00:08:27.438 "nvme_io": false, 00:08:27.438 "nvme_io_md": false, 00:08:27.438 "write_zeroes": true, 00:08:27.438 "zcopy": true, 00:08:27.438 "get_zone_info": false, 00:08:27.438 "zone_management": false, 00:08:27.438 "zone_append": false, 00:08:27.438 "compare": false, 00:08:27.438 "compare_and_write": false, 00:08:27.438 "abort": true, 00:08:27.438 "seek_hole": false, 00:08:27.438 "seek_data": false, 00:08:27.438 "copy": true, 00:08:27.438 "nvme_iov_md": false 00:08:27.438 }, 00:08:27.438 "memory_domains": [ 00:08:27.438 { 00:08:27.438 "dma_device_id": "system", 00:08:27.438 "dma_device_type": 1 00:08:27.438 }, 00:08:27.438 { 00:08:27.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.438 "dma_device_type": 2 00:08:27.438 } 00:08:27.438 ], 00:08:27.438 "driver_specific": {} 00:08:27.438 } 00:08:27.438 ] 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.438 "name": "Existed_Raid", 00:08:27.438 "uuid": "d1cb2273-9f2e-4d3d-b0a6-f38b0309ef1d", 00:08:27.438 "strip_size_kb": 64, 00:08:27.438 "state": "configuring", 00:08:27.438 "raid_level": "concat", 00:08:27.438 "superblock": true, 00:08:27.438 
"num_base_bdevs": 3, 00:08:27.438 "num_base_bdevs_discovered": 1, 00:08:27.438 "num_base_bdevs_operational": 3, 00:08:27.438 "base_bdevs_list": [ 00:08:27.438 { 00:08:27.438 "name": "BaseBdev1", 00:08:27.438 "uuid": "00eb204e-2416-4a5b-b254-b7454ee41459", 00:08:27.438 "is_configured": true, 00:08:27.438 "data_offset": 2048, 00:08:27.438 "data_size": 63488 00:08:27.438 }, 00:08:27.438 { 00:08:27.438 "name": "BaseBdev2", 00:08:27.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.438 "is_configured": false, 00:08:27.438 "data_offset": 0, 00:08:27.438 "data_size": 0 00:08:27.438 }, 00:08:27.438 { 00:08:27.438 "name": "BaseBdev3", 00:08:27.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.438 "is_configured": false, 00:08:27.438 "data_offset": 0, 00:08:27.438 "data_size": 0 00:08:27.438 } 00:08:27.438 ] 00:08:27.438 }' 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.438 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.007 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.007 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.007 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.007 [2024-11-26 13:21:16.367479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.007 [2024-11-26 13:21:16.367515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:28.007 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.007 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.007 
13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.008 [2024-11-26 13:21:16.375544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.008 [2024-11-26 13:21:16.377660] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.008 [2024-11-26 13:21:16.377707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.008 [2024-11-26 13:21:16.377721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.008 [2024-11-26 13:21:16.377734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.008 "name": "Existed_Raid", 00:08:28.008 "uuid": "5a2421a6-43b6-4ad2-8cef-ee6823ccd289", 00:08:28.008 "strip_size_kb": 64, 00:08:28.008 "state": "configuring", 00:08:28.008 "raid_level": "concat", 00:08:28.008 "superblock": true, 00:08:28.008 "num_base_bdevs": 3, 00:08:28.008 "num_base_bdevs_discovered": 1, 00:08:28.008 "num_base_bdevs_operational": 3, 00:08:28.008 "base_bdevs_list": [ 00:08:28.008 { 00:08:28.008 "name": "BaseBdev1", 00:08:28.008 "uuid": "00eb204e-2416-4a5b-b254-b7454ee41459", 00:08:28.008 "is_configured": true, 00:08:28.008 "data_offset": 2048, 00:08:28.008 "data_size": 63488 00:08:28.008 }, 00:08:28.008 { 00:08:28.008 "name": "BaseBdev2", 00:08:28.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.008 "is_configured": false, 00:08:28.008 "data_offset": 0, 00:08:28.008 "data_size": 0 00:08:28.008 }, 00:08:28.008 { 00:08:28.008 "name": "BaseBdev3", 00:08:28.008 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:28.008 "is_configured": false, 00:08:28.008 "data_offset": 0, 00:08:28.008 "data_size": 0 00:08:28.008 } 00:08:28.008 ] 00:08:28.008 }' 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.008 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.577 [2024-11-26 13:21:16.891794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.577 BaseBdev2 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.577 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.578 [ 00:08:28.578 { 00:08:28.578 "name": "BaseBdev2", 00:08:28.578 "aliases": [ 00:08:28.578 "9d001342-823f-41cd-8fb4-98cc01802770" 00:08:28.578 ], 00:08:28.578 "product_name": "Malloc disk", 00:08:28.578 "block_size": 512, 00:08:28.578 "num_blocks": 65536, 00:08:28.578 "uuid": "9d001342-823f-41cd-8fb4-98cc01802770", 00:08:28.578 "assigned_rate_limits": { 00:08:28.578 "rw_ios_per_sec": 0, 00:08:28.578 "rw_mbytes_per_sec": 0, 00:08:28.578 "r_mbytes_per_sec": 0, 00:08:28.578 "w_mbytes_per_sec": 0 00:08:28.578 }, 00:08:28.578 "claimed": true, 00:08:28.578 "claim_type": "exclusive_write", 00:08:28.578 "zoned": false, 00:08:28.578 "supported_io_types": { 00:08:28.578 "read": true, 00:08:28.578 "write": true, 00:08:28.578 "unmap": true, 00:08:28.578 "flush": true, 00:08:28.578 "reset": true, 00:08:28.578 "nvme_admin": false, 00:08:28.578 "nvme_io": false, 00:08:28.578 "nvme_io_md": false, 00:08:28.578 "write_zeroes": true, 00:08:28.578 "zcopy": true, 00:08:28.578 "get_zone_info": false, 00:08:28.578 "zone_management": false, 00:08:28.578 "zone_append": false, 00:08:28.578 "compare": false, 00:08:28.578 "compare_and_write": false, 00:08:28.578 "abort": true, 00:08:28.578 "seek_hole": false, 00:08:28.578 "seek_data": false, 00:08:28.578 "copy": true, 00:08:28.578 "nvme_iov_md": false 00:08:28.578 }, 00:08:28.578 "memory_domains": [ 00:08:28.578 { 00:08:28.578 "dma_device_id": "system", 00:08:28.578 "dma_device_type": 1 00:08:28.578 }, 00:08:28.578 { 00:08:28.578 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.578 "dma_device_type": 2 00:08:28.578 } 00:08:28.578 ], 00:08:28.578 "driver_specific": {} 00:08:28.578 } 00:08:28.578 ] 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.578 "name": "Existed_Raid", 00:08:28.578 "uuid": "5a2421a6-43b6-4ad2-8cef-ee6823ccd289", 00:08:28.578 "strip_size_kb": 64, 00:08:28.578 "state": "configuring", 00:08:28.578 "raid_level": "concat", 00:08:28.578 "superblock": true, 00:08:28.578 "num_base_bdevs": 3, 00:08:28.578 "num_base_bdevs_discovered": 2, 00:08:28.578 "num_base_bdevs_operational": 3, 00:08:28.578 "base_bdevs_list": [ 00:08:28.578 { 00:08:28.578 "name": "BaseBdev1", 00:08:28.578 "uuid": "00eb204e-2416-4a5b-b254-b7454ee41459", 00:08:28.578 "is_configured": true, 00:08:28.578 "data_offset": 2048, 00:08:28.578 "data_size": 63488 00:08:28.578 }, 00:08:28.578 { 00:08:28.578 "name": "BaseBdev2", 00:08:28.578 "uuid": "9d001342-823f-41cd-8fb4-98cc01802770", 00:08:28.578 "is_configured": true, 00:08:28.578 "data_offset": 2048, 00:08:28.578 "data_size": 63488 00:08:28.578 }, 00:08:28.578 { 00:08:28.578 "name": "BaseBdev3", 00:08:28.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.578 "is_configured": false, 00:08:28.578 "data_offset": 0, 00:08:28.578 "data_size": 0 00:08:28.578 } 00:08:28.578 ] 00:08:28.578 }' 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.578 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:29.146 13:21:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.146 [2024-11-26 13:21:17.486135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.146 [2024-11-26 13:21:17.486424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.146 [2024-11-26 13:21:17.486452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.146 BaseBdev3 00:08:29.146 [2024-11-26 13:21:17.486828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.146 [2024-11-26 13:21:17.487021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.146 [2024-11-26 13:21:17.487038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:29.146 [2024-11-26 13:21:17.487214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.146 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.147 [ 00:08:29.147 { 00:08:29.147 "name": "BaseBdev3", 00:08:29.147 "aliases": [ 00:08:29.147 "fd97205c-20d9-4973-8d81-b2e59bc396e9" 00:08:29.147 ], 00:08:29.147 "product_name": "Malloc disk", 00:08:29.147 "block_size": 512, 00:08:29.147 "num_blocks": 65536, 00:08:29.147 "uuid": "fd97205c-20d9-4973-8d81-b2e59bc396e9", 00:08:29.147 "assigned_rate_limits": { 00:08:29.147 "rw_ios_per_sec": 0, 00:08:29.147 "rw_mbytes_per_sec": 0, 00:08:29.147 "r_mbytes_per_sec": 0, 00:08:29.147 "w_mbytes_per_sec": 0 00:08:29.147 }, 00:08:29.147 "claimed": true, 00:08:29.147 "claim_type": "exclusive_write", 00:08:29.147 "zoned": false, 00:08:29.147 "supported_io_types": { 00:08:29.147 "read": true, 00:08:29.147 "write": true, 00:08:29.147 "unmap": true, 00:08:29.147 "flush": true, 00:08:29.147 "reset": true, 00:08:29.147 "nvme_admin": false, 00:08:29.147 "nvme_io": false, 00:08:29.147 "nvme_io_md": false, 00:08:29.147 "write_zeroes": true, 00:08:29.147 "zcopy": true, 00:08:29.147 "get_zone_info": false, 00:08:29.147 "zone_management": false, 00:08:29.147 "zone_append": false, 00:08:29.147 "compare": false, 00:08:29.147 "compare_and_write": false, 00:08:29.147 "abort": true, 00:08:29.147 "seek_hole": false, 00:08:29.147 "seek_data": false, 
00:08:29.147 "copy": true, 00:08:29.147 "nvme_iov_md": false 00:08:29.147 }, 00:08:29.147 "memory_domains": [ 00:08:29.147 { 00:08:29.147 "dma_device_id": "system", 00:08:29.147 "dma_device_type": 1 00:08:29.147 }, 00:08:29.147 { 00:08:29.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.147 "dma_device_type": 2 00:08:29.147 } 00:08:29.147 ], 00:08:29.147 "driver_specific": {} 00:08:29.147 } 00:08:29.147 ] 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.147 "name": "Existed_Raid", 00:08:29.147 "uuid": "5a2421a6-43b6-4ad2-8cef-ee6823ccd289", 00:08:29.147 "strip_size_kb": 64, 00:08:29.147 "state": "online", 00:08:29.147 "raid_level": "concat", 00:08:29.147 "superblock": true, 00:08:29.147 "num_base_bdevs": 3, 00:08:29.147 "num_base_bdevs_discovered": 3, 00:08:29.147 "num_base_bdevs_operational": 3, 00:08:29.147 "base_bdevs_list": [ 00:08:29.147 { 00:08:29.147 "name": "BaseBdev1", 00:08:29.147 "uuid": "00eb204e-2416-4a5b-b254-b7454ee41459", 00:08:29.147 "is_configured": true, 00:08:29.147 "data_offset": 2048, 00:08:29.147 "data_size": 63488 00:08:29.147 }, 00:08:29.147 { 00:08:29.147 "name": "BaseBdev2", 00:08:29.147 "uuid": "9d001342-823f-41cd-8fb4-98cc01802770", 00:08:29.147 "is_configured": true, 00:08:29.147 "data_offset": 2048, 00:08:29.147 "data_size": 63488 00:08:29.147 }, 00:08:29.147 { 00:08:29.147 "name": "BaseBdev3", 00:08:29.147 "uuid": "fd97205c-20d9-4973-8d81-b2e59bc396e9", 00:08:29.147 "is_configured": true, 00:08:29.147 "data_offset": 2048, 00:08:29.147 "data_size": 63488 00:08:29.147 } 00:08:29.147 ] 00:08:29.147 }' 00:08:29.147 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.147 13:21:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.714 [2024-11-26 13:21:18.050644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.714 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.714 "name": "Existed_Raid", 00:08:29.714 "aliases": [ 00:08:29.714 "5a2421a6-43b6-4ad2-8cef-ee6823ccd289" 00:08:29.714 ], 00:08:29.714 "product_name": "Raid Volume", 00:08:29.714 "block_size": 512, 00:08:29.714 "num_blocks": 190464, 00:08:29.714 "uuid": "5a2421a6-43b6-4ad2-8cef-ee6823ccd289", 00:08:29.714 "assigned_rate_limits": { 00:08:29.714 "rw_ios_per_sec": 0, 00:08:29.714 "rw_mbytes_per_sec": 0, 00:08:29.714 
"r_mbytes_per_sec": 0, 00:08:29.714 "w_mbytes_per_sec": 0 00:08:29.714 }, 00:08:29.714 "claimed": false, 00:08:29.714 "zoned": false, 00:08:29.714 "supported_io_types": { 00:08:29.714 "read": true, 00:08:29.714 "write": true, 00:08:29.714 "unmap": true, 00:08:29.714 "flush": true, 00:08:29.714 "reset": true, 00:08:29.714 "nvme_admin": false, 00:08:29.714 "nvme_io": false, 00:08:29.714 "nvme_io_md": false, 00:08:29.714 "write_zeroes": true, 00:08:29.714 "zcopy": false, 00:08:29.714 "get_zone_info": false, 00:08:29.714 "zone_management": false, 00:08:29.714 "zone_append": false, 00:08:29.714 "compare": false, 00:08:29.714 "compare_and_write": false, 00:08:29.714 "abort": false, 00:08:29.714 "seek_hole": false, 00:08:29.714 "seek_data": false, 00:08:29.714 "copy": false, 00:08:29.714 "nvme_iov_md": false 00:08:29.714 }, 00:08:29.714 "memory_domains": [ 00:08:29.714 { 00:08:29.714 "dma_device_id": "system", 00:08:29.714 "dma_device_type": 1 00:08:29.714 }, 00:08:29.714 { 00:08:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.714 "dma_device_type": 2 00:08:29.714 }, 00:08:29.714 { 00:08:29.714 "dma_device_id": "system", 00:08:29.714 "dma_device_type": 1 00:08:29.714 }, 00:08:29.714 { 00:08:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.714 "dma_device_type": 2 00:08:29.714 }, 00:08:29.714 { 00:08:29.714 "dma_device_id": "system", 00:08:29.714 "dma_device_type": 1 00:08:29.714 }, 00:08:29.714 { 00:08:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.714 "dma_device_type": 2 00:08:29.714 } 00:08:29.714 ], 00:08:29.714 "driver_specific": { 00:08:29.714 "raid": { 00:08:29.714 "uuid": "5a2421a6-43b6-4ad2-8cef-ee6823ccd289", 00:08:29.714 "strip_size_kb": 64, 00:08:29.714 "state": "online", 00:08:29.714 "raid_level": "concat", 00:08:29.714 "superblock": true, 00:08:29.714 "num_base_bdevs": 3, 00:08:29.715 "num_base_bdevs_discovered": 3, 00:08:29.715 "num_base_bdevs_operational": 3, 00:08:29.715 "base_bdevs_list": [ 00:08:29.715 { 00:08:29.715 
"name": "BaseBdev1", 00:08:29.715 "uuid": "00eb204e-2416-4a5b-b254-b7454ee41459", 00:08:29.715 "is_configured": true, 00:08:29.715 "data_offset": 2048, 00:08:29.715 "data_size": 63488 00:08:29.715 }, 00:08:29.715 { 00:08:29.715 "name": "BaseBdev2", 00:08:29.715 "uuid": "9d001342-823f-41cd-8fb4-98cc01802770", 00:08:29.715 "is_configured": true, 00:08:29.715 "data_offset": 2048, 00:08:29.715 "data_size": 63488 00:08:29.715 }, 00:08:29.715 { 00:08:29.715 "name": "BaseBdev3", 00:08:29.715 "uuid": "fd97205c-20d9-4973-8d81-b2e59bc396e9", 00:08:29.715 "is_configured": true, 00:08:29.715 "data_offset": 2048, 00:08:29.715 "data_size": 63488 00:08:29.715 } 00:08:29.715 ] 00:08:29.715 } 00:08:29.715 } 00:08:29.715 }' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:29.715 BaseBdev2 00:08:29.715 BaseBdev3' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.715 13:21:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.715 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.974 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.975 [2024-11-26 13:21:18.370482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.975 [2024-11-26 13:21:18.370657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.975 [2024-11-26 13:21:18.370731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.975 "name": "Existed_Raid", 00:08:29.975 "uuid": "5a2421a6-43b6-4ad2-8cef-ee6823ccd289", 00:08:29.975 "strip_size_kb": 64, 00:08:29.975 "state": "offline", 00:08:29.975 "raid_level": "concat", 00:08:29.975 "superblock": true, 00:08:29.975 "num_base_bdevs": 3, 00:08:29.975 "num_base_bdevs_discovered": 2, 00:08:29.975 "num_base_bdevs_operational": 2, 00:08:29.975 "base_bdevs_list": [ 00:08:29.975 { 00:08:29.975 "name": null, 00:08:29.975 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:29.975 "is_configured": false, 00:08:29.975 "data_offset": 0, 00:08:29.975 "data_size": 63488 00:08:29.975 }, 00:08:29.975 { 00:08:29.975 "name": "BaseBdev2", 00:08:29.975 "uuid": "9d001342-823f-41cd-8fb4-98cc01802770", 00:08:29.975 "is_configured": true, 00:08:29.975 "data_offset": 2048, 00:08:29.975 "data_size": 63488 00:08:29.975 }, 00:08:29.975 { 00:08:29.975 "name": "BaseBdev3", 00:08:29.975 "uuid": "fd97205c-20d9-4973-8d81-b2e59bc396e9", 00:08:29.975 "is_configured": true, 00:08:29.975 "data_offset": 2048, 00:08:29.975 "data_size": 63488 00:08:29.975 } 00:08:29.975 ] 00:08:29.975 }' 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.975 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.543 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:30.543 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.543 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.543 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.543 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.543 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.543 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.543 [2024-11-26 13:21:19.015114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.543 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.806 [2024-11-26 13:21:19.141112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:30.806 [2024-11-26 13:21:19.141162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.806 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 BaseBdev2 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.807 
13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 [ 00:08:30.807 { 00:08:30.807 "name": "BaseBdev2", 00:08:30.807 "aliases": [ 00:08:30.807 "7416923f-3175-4170-ab19-a733c528ba05" 00:08:30.807 ], 00:08:30.807 "product_name": "Malloc disk", 00:08:30.807 "block_size": 512, 00:08:30.807 "num_blocks": 65536, 00:08:30.807 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:30.807 "assigned_rate_limits": { 00:08:30.807 "rw_ios_per_sec": 0, 00:08:30.807 "rw_mbytes_per_sec": 0, 00:08:30.807 "r_mbytes_per_sec": 0, 00:08:30.807 "w_mbytes_per_sec": 0 
00:08:30.807 }, 00:08:30.807 "claimed": false, 00:08:30.807 "zoned": false, 00:08:30.807 "supported_io_types": { 00:08:30.807 "read": true, 00:08:30.807 "write": true, 00:08:30.807 "unmap": true, 00:08:30.807 "flush": true, 00:08:30.807 "reset": true, 00:08:30.807 "nvme_admin": false, 00:08:30.807 "nvme_io": false, 00:08:30.807 "nvme_io_md": false, 00:08:30.807 "write_zeroes": true, 00:08:30.807 "zcopy": true, 00:08:30.807 "get_zone_info": false, 00:08:30.807 "zone_management": false, 00:08:30.807 "zone_append": false, 00:08:30.807 "compare": false, 00:08:30.807 "compare_and_write": false, 00:08:30.807 "abort": true, 00:08:30.807 "seek_hole": false, 00:08:30.807 "seek_data": false, 00:08:30.807 "copy": true, 00:08:30.807 "nvme_iov_md": false 00:08:30.807 }, 00:08:30.807 "memory_domains": [ 00:08:30.807 { 00:08:30.807 "dma_device_id": "system", 00:08:30.807 "dma_device_type": 1 00:08:30.807 }, 00:08:30.807 { 00:08:30.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.807 "dma_device_type": 2 00:08:30.807 } 00:08:30.807 ], 00:08:30.807 "driver_specific": {} 00:08:30.807 } 00:08:30.807 ] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 BaseBdev3 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.807 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.067 [ 00:08:31.067 { 00:08:31.067 "name": "BaseBdev3", 00:08:31.067 "aliases": [ 00:08:31.067 "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613" 00:08:31.067 ], 00:08:31.067 "product_name": "Malloc disk", 00:08:31.067 "block_size": 512, 00:08:31.067 "num_blocks": 65536, 00:08:31.067 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:31.067 "assigned_rate_limits": { 00:08:31.067 "rw_ios_per_sec": 0, 00:08:31.067 "rw_mbytes_per_sec": 0, 
00:08:31.067 "r_mbytes_per_sec": 0, 00:08:31.067 "w_mbytes_per_sec": 0 00:08:31.067 }, 00:08:31.067 "claimed": false, 00:08:31.067 "zoned": false, 00:08:31.067 "supported_io_types": { 00:08:31.067 "read": true, 00:08:31.067 "write": true, 00:08:31.067 "unmap": true, 00:08:31.067 "flush": true, 00:08:31.067 "reset": true, 00:08:31.067 "nvme_admin": false, 00:08:31.067 "nvme_io": false, 00:08:31.067 "nvme_io_md": false, 00:08:31.067 "write_zeroes": true, 00:08:31.067 "zcopy": true, 00:08:31.067 "get_zone_info": false, 00:08:31.067 "zone_management": false, 00:08:31.067 "zone_append": false, 00:08:31.067 "compare": false, 00:08:31.067 "compare_and_write": false, 00:08:31.067 "abort": true, 00:08:31.067 "seek_hole": false, 00:08:31.067 "seek_data": false, 00:08:31.067 "copy": true, 00:08:31.067 "nvme_iov_md": false 00:08:31.067 }, 00:08:31.067 "memory_domains": [ 00:08:31.067 { 00:08:31.067 "dma_device_id": "system", 00:08:31.067 "dma_device_type": 1 00:08:31.067 }, 00:08:31.067 { 00:08:31.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.067 "dma_device_type": 2 00:08:31.067 } 00:08:31.067 ], 00:08:31.067 "driver_specific": {} 00:08:31.067 } 00:08:31.067 ] 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.067 [2024-11-26 13:21:19.389831] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.067 [2024-11-26 13:21:19.390039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.067 [2024-11-26 13:21:19.390171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.067 [2024-11-26 13:21:19.392341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.067 13:21:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.067 "name": "Existed_Raid", 00:08:31.067 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:31.067 "strip_size_kb": 64, 00:08:31.067 "state": "configuring", 00:08:31.067 "raid_level": "concat", 00:08:31.067 "superblock": true, 00:08:31.067 "num_base_bdevs": 3, 00:08:31.067 "num_base_bdevs_discovered": 2, 00:08:31.067 "num_base_bdevs_operational": 3, 00:08:31.067 "base_bdevs_list": [ 00:08:31.067 { 00:08:31.067 "name": "BaseBdev1", 00:08:31.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.067 "is_configured": false, 00:08:31.067 "data_offset": 0, 00:08:31.067 "data_size": 0 00:08:31.067 }, 00:08:31.067 { 00:08:31.067 "name": "BaseBdev2", 00:08:31.067 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:31.067 "is_configured": true, 00:08:31.067 "data_offset": 2048, 00:08:31.067 "data_size": 63488 00:08:31.067 }, 00:08:31.067 { 00:08:31.067 "name": "BaseBdev3", 00:08:31.067 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:31.067 "is_configured": true, 00:08:31.067 "data_offset": 2048, 00:08:31.067 "data_size": 63488 00:08:31.067 } 00:08:31.067 ] 00:08:31.067 }' 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.067 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 [2024-11-26 13:21:19.905902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 13:21:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.634 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.635 "name": "Existed_Raid", 00:08:31.635 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:31.635 "strip_size_kb": 64, 00:08:31.635 "state": "configuring", 00:08:31.635 "raid_level": "concat", 00:08:31.635 "superblock": true, 00:08:31.635 "num_base_bdevs": 3, 00:08:31.635 "num_base_bdevs_discovered": 1, 00:08:31.635 "num_base_bdevs_operational": 3, 00:08:31.635 "base_bdevs_list": [ 00:08:31.635 { 00:08:31.635 "name": "BaseBdev1", 00:08:31.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.635 "is_configured": false, 00:08:31.635 "data_offset": 0, 00:08:31.635 "data_size": 0 00:08:31.635 }, 00:08:31.635 { 00:08:31.635 "name": null, 00:08:31.635 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:31.635 "is_configured": false, 00:08:31.635 "data_offset": 0, 00:08:31.635 "data_size": 63488 00:08:31.635 }, 00:08:31.635 { 00:08:31.635 "name": "BaseBdev3", 00:08:31.635 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:31.635 "is_configured": true, 00:08:31.635 "data_offset": 2048, 00:08:31.635 "data_size": 63488 00:08:31.635 } 00:08:31.635 ] 00:08:31.635 }' 00:08:31.635 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.635 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:31.893 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.893 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:31.893 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.894 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.153 BaseBdev1 00:08:32.153 [2024-11-26 13:21:20.521895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.153 [ 00:08:32.153 { 00:08:32.153 "name": "BaseBdev1", 00:08:32.153 "aliases": [ 00:08:32.153 "684f9b5f-6703-45ac-94d3-475dba575e6e" 00:08:32.153 ], 00:08:32.153 "product_name": "Malloc disk", 00:08:32.153 "block_size": 512, 00:08:32.153 "num_blocks": 65536, 00:08:32.153 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:32.153 "assigned_rate_limits": { 00:08:32.153 "rw_ios_per_sec": 0, 00:08:32.153 "rw_mbytes_per_sec": 0, 00:08:32.153 "r_mbytes_per_sec": 0, 00:08:32.153 "w_mbytes_per_sec": 0 00:08:32.153 }, 00:08:32.153 "claimed": true, 00:08:32.153 "claim_type": "exclusive_write", 00:08:32.153 "zoned": false, 00:08:32.153 "supported_io_types": { 00:08:32.153 "read": true, 00:08:32.153 "write": true, 00:08:32.153 "unmap": true, 00:08:32.153 "flush": true, 00:08:32.153 "reset": true, 00:08:32.153 "nvme_admin": false, 00:08:32.153 "nvme_io": false, 00:08:32.153 "nvme_io_md": false, 00:08:32.153 "write_zeroes": true, 00:08:32.153 "zcopy": true, 00:08:32.153 "get_zone_info": false, 00:08:32.153 "zone_management": false, 00:08:32.153 "zone_append": false, 00:08:32.153 "compare": false, 00:08:32.153 "compare_and_write": false, 00:08:32.153 "abort": true, 00:08:32.153 "seek_hole": false, 00:08:32.153 "seek_data": false, 00:08:32.153 "copy": true, 00:08:32.153 "nvme_iov_md": false 00:08:32.153 }, 00:08:32.153 "memory_domains": [ 00:08:32.153 { 00:08:32.153 "dma_device_id": "system", 00:08:32.153 "dma_device_type": 1 00:08:32.153 }, 00:08:32.153 { 00:08:32.153 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:32.153 "dma_device_type": 2 00:08:32.153 } 00:08:32.153 ], 00:08:32.153 "driver_specific": {} 00:08:32.153 } 00:08:32.153 ] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.153 "name": "Existed_Raid", 00:08:32.153 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:32.153 "strip_size_kb": 64, 00:08:32.153 "state": "configuring", 00:08:32.153 "raid_level": "concat", 00:08:32.153 "superblock": true, 00:08:32.153 "num_base_bdevs": 3, 00:08:32.153 "num_base_bdevs_discovered": 2, 00:08:32.153 "num_base_bdevs_operational": 3, 00:08:32.153 "base_bdevs_list": [ 00:08:32.153 { 00:08:32.153 "name": "BaseBdev1", 00:08:32.153 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:32.153 "is_configured": true, 00:08:32.153 "data_offset": 2048, 00:08:32.153 "data_size": 63488 00:08:32.153 }, 00:08:32.153 { 00:08:32.153 "name": null, 00:08:32.153 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:32.153 "is_configured": false, 00:08:32.153 "data_offset": 0, 00:08:32.153 "data_size": 63488 00:08:32.153 }, 00:08:32.153 { 00:08:32.153 "name": "BaseBdev3", 00:08:32.153 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:32.153 "is_configured": true, 00:08:32.153 "data_offset": 2048, 00:08:32.153 "data_size": 63488 00:08:32.153 } 00:08:32.153 ] 00:08:32.153 }' 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.153 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.721 [2024-11-26 13:21:21.126033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.721 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.722 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.722 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.722 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.722 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.722 "name": "Existed_Raid", 00:08:32.722 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:32.722 "strip_size_kb": 64, 00:08:32.722 "state": "configuring", 00:08:32.722 "raid_level": "concat", 00:08:32.722 "superblock": true, 00:08:32.722 "num_base_bdevs": 3, 00:08:32.722 "num_base_bdevs_discovered": 1, 00:08:32.722 "num_base_bdevs_operational": 3, 00:08:32.722 "base_bdevs_list": [ 00:08:32.722 { 00:08:32.722 "name": "BaseBdev1", 00:08:32.722 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:32.722 "is_configured": true, 00:08:32.722 "data_offset": 2048, 00:08:32.722 "data_size": 63488 00:08:32.722 }, 00:08:32.722 { 00:08:32.722 "name": null, 00:08:32.722 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:32.722 "is_configured": false, 00:08:32.722 "data_offset": 0, 00:08:32.722 "data_size": 63488 00:08:32.722 }, 00:08:32.722 { 00:08:32.722 "name": null, 00:08:32.722 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:32.722 "is_configured": false, 00:08:32.722 "data_offset": 0, 00:08:32.722 "data_size": 63488 00:08:32.722 } 00:08:32.722 ] 00:08:32.722 }' 00:08:32.722 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.722 13:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.290 [2024-11-26 13:21:21.702202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.290 13:21:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.290 "name": "Existed_Raid", 00:08:33.290 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:33.290 "strip_size_kb": 64, 00:08:33.290 "state": "configuring", 00:08:33.290 "raid_level": "concat", 00:08:33.290 "superblock": true, 00:08:33.290 "num_base_bdevs": 3, 00:08:33.290 "num_base_bdevs_discovered": 2, 00:08:33.290 "num_base_bdevs_operational": 3, 00:08:33.290 "base_bdevs_list": [ 00:08:33.290 { 00:08:33.290 "name": "BaseBdev1", 00:08:33.290 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:33.290 "is_configured": true, 00:08:33.290 "data_offset": 2048, 00:08:33.290 "data_size": 63488 00:08:33.290 }, 00:08:33.290 { 00:08:33.290 "name": null, 00:08:33.290 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:33.290 "is_configured": 
false, 00:08:33.290 "data_offset": 0, 00:08:33.290 "data_size": 63488 00:08:33.290 }, 00:08:33.290 { 00:08:33.290 "name": "BaseBdev3", 00:08:33.290 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:33.290 "is_configured": true, 00:08:33.290 "data_offset": 2048, 00:08:33.290 "data_size": 63488 00:08:33.290 } 00:08:33.290 ] 00:08:33.290 }' 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.290 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 [2024-11-26 13:21:22.286373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.858 13:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.858 "name": "Existed_Raid", 00:08:33.858 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:33.858 "strip_size_kb": 64, 00:08:33.858 "state": "configuring", 00:08:33.858 "raid_level": "concat", 00:08:33.858 "superblock": true, 00:08:33.858 "num_base_bdevs": 3, 00:08:33.858 
"num_base_bdevs_discovered": 1, 00:08:33.858 "num_base_bdevs_operational": 3, 00:08:33.858 "base_bdevs_list": [ 00:08:33.858 { 00:08:33.858 "name": null, 00:08:33.858 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:33.858 "is_configured": false, 00:08:33.858 "data_offset": 0, 00:08:33.858 "data_size": 63488 00:08:33.858 }, 00:08:33.858 { 00:08:33.858 "name": null, 00:08:33.858 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:33.858 "is_configured": false, 00:08:33.858 "data_offset": 0, 00:08:33.858 "data_size": 63488 00:08:33.858 }, 00:08:33.858 { 00:08:33.858 "name": "BaseBdev3", 00:08:33.858 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:33.858 "is_configured": true, 00:08:33.858 "data_offset": 2048, 00:08:33.858 "data_size": 63488 00:08:33.858 } 00:08:33.858 ] 00:08:33.858 }' 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.858 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.426 13:21:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.426 [2024-11-26 13:21:22.942197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.426 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.426 
13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.685 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.685 "name": "Existed_Raid", 00:08:34.685 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:34.685 "strip_size_kb": 64, 00:08:34.685 "state": "configuring", 00:08:34.685 "raid_level": "concat", 00:08:34.685 "superblock": true, 00:08:34.685 "num_base_bdevs": 3, 00:08:34.685 "num_base_bdevs_discovered": 2, 00:08:34.685 "num_base_bdevs_operational": 3, 00:08:34.685 "base_bdevs_list": [ 00:08:34.685 { 00:08:34.685 "name": null, 00:08:34.685 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:34.685 "is_configured": false, 00:08:34.685 "data_offset": 0, 00:08:34.685 "data_size": 63488 00:08:34.685 }, 00:08:34.685 { 00:08:34.685 "name": "BaseBdev2", 00:08:34.685 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:34.685 "is_configured": true, 00:08:34.685 "data_offset": 2048, 00:08:34.685 "data_size": 63488 00:08:34.685 }, 00:08:34.685 { 00:08:34.685 "name": "BaseBdev3", 00:08:34.685 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:34.685 "is_configured": true, 00:08:34.685 "data_offset": 2048, 00:08:34.685 "data_size": 63488 00:08:34.685 } 00:08:34.685 ] 00:08:34.685 }' 00:08:34.685 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.685 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.944 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.944 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:34.944 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.944 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:34.944 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 684f9b5f-6703-45ac-94d3-475dba575e6e 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.203 [2024-11-26 13:21:23.620730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.203 [2024-11-26 13:21:23.620935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.203 [2024-11-26 13:21:23.620955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.203 [2024-11-26 13:21:23.621197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:35.203 NewBaseBdev 00:08:35.203 [2024-11-26 13:21:23.621366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.203 [2024-11-26 13:21:23.621381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:08:35.203 [2024-11-26 13:21:23.621537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.203 [ 00:08:35.203 { 00:08:35.203 "name": "NewBaseBdev", 00:08:35.203 "aliases": [ 00:08:35.203 "684f9b5f-6703-45ac-94d3-475dba575e6e" 00:08:35.203 ], 00:08:35.203 "product_name": "Malloc disk", 00:08:35.203 "block_size": 512, 
00:08:35.203 "num_blocks": 65536, 00:08:35.203 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:35.203 "assigned_rate_limits": { 00:08:35.203 "rw_ios_per_sec": 0, 00:08:35.203 "rw_mbytes_per_sec": 0, 00:08:35.203 "r_mbytes_per_sec": 0, 00:08:35.203 "w_mbytes_per_sec": 0 00:08:35.203 }, 00:08:35.203 "claimed": true, 00:08:35.203 "claim_type": "exclusive_write", 00:08:35.203 "zoned": false, 00:08:35.203 "supported_io_types": { 00:08:35.203 "read": true, 00:08:35.203 "write": true, 00:08:35.203 "unmap": true, 00:08:35.203 "flush": true, 00:08:35.203 "reset": true, 00:08:35.203 "nvme_admin": false, 00:08:35.203 "nvme_io": false, 00:08:35.203 "nvme_io_md": false, 00:08:35.203 "write_zeroes": true, 00:08:35.203 "zcopy": true, 00:08:35.203 "get_zone_info": false, 00:08:35.203 "zone_management": false, 00:08:35.203 "zone_append": false, 00:08:35.203 "compare": false, 00:08:35.203 "compare_and_write": false, 00:08:35.203 "abort": true, 00:08:35.203 "seek_hole": false, 00:08:35.203 "seek_data": false, 00:08:35.203 "copy": true, 00:08:35.203 "nvme_iov_md": false 00:08:35.203 }, 00:08:35.203 "memory_domains": [ 00:08:35.203 { 00:08:35.203 "dma_device_id": "system", 00:08:35.203 "dma_device_type": 1 00:08:35.203 }, 00:08:35.203 { 00:08:35.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.203 "dma_device_type": 2 00:08:35.203 } 00:08:35.203 ], 00:08:35.203 "driver_specific": {} 00:08:35.203 } 00:08:35.203 ] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.203 "name": "Existed_Raid", 00:08:35.203 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:35.203 "strip_size_kb": 64, 00:08:35.203 "state": "online", 00:08:35.203 "raid_level": "concat", 00:08:35.203 "superblock": true, 00:08:35.203 "num_base_bdevs": 3, 00:08:35.203 "num_base_bdevs_discovered": 3, 00:08:35.203 "num_base_bdevs_operational": 3, 00:08:35.203 "base_bdevs_list": [ 00:08:35.203 { 00:08:35.203 "name": "NewBaseBdev", 00:08:35.203 "uuid": 
"684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:35.203 "is_configured": true, 00:08:35.203 "data_offset": 2048, 00:08:35.203 "data_size": 63488 00:08:35.203 }, 00:08:35.203 { 00:08:35.203 "name": "BaseBdev2", 00:08:35.203 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:35.203 "is_configured": true, 00:08:35.203 "data_offset": 2048, 00:08:35.203 "data_size": 63488 00:08:35.203 }, 00:08:35.203 { 00:08:35.203 "name": "BaseBdev3", 00:08:35.203 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:35.203 "is_configured": true, 00:08:35.203 "data_offset": 2048, 00:08:35.203 "data_size": 63488 00:08:35.203 } 00:08:35.203 ] 00:08:35.203 }' 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.203 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:35.772 [2024-11-26 13:21:24.185117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.772 "name": "Existed_Raid", 00:08:35.772 "aliases": [ 00:08:35.772 "af5aa16b-4317-4278-9d49-42e8966d0d6c" 00:08:35.772 ], 00:08:35.772 "product_name": "Raid Volume", 00:08:35.772 "block_size": 512, 00:08:35.772 "num_blocks": 190464, 00:08:35.772 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:35.772 "assigned_rate_limits": { 00:08:35.772 "rw_ios_per_sec": 0, 00:08:35.772 "rw_mbytes_per_sec": 0, 00:08:35.772 "r_mbytes_per_sec": 0, 00:08:35.772 "w_mbytes_per_sec": 0 00:08:35.772 }, 00:08:35.772 "claimed": false, 00:08:35.772 "zoned": false, 00:08:35.772 "supported_io_types": { 00:08:35.772 "read": true, 00:08:35.772 "write": true, 00:08:35.772 "unmap": true, 00:08:35.772 "flush": true, 00:08:35.772 "reset": true, 00:08:35.772 "nvme_admin": false, 00:08:35.772 "nvme_io": false, 00:08:35.772 "nvme_io_md": false, 00:08:35.772 "write_zeroes": true, 00:08:35.772 "zcopy": false, 00:08:35.772 "get_zone_info": false, 00:08:35.772 "zone_management": false, 00:08:35.772 "zone_append": false, 00:08:35.772 "compare": false, 00:08:35.772 "compare_and_write": false, 00:08:35.772 "abort": false, 00:08:35.772 "seek_hole": false, 00:08:35.772 "seek_data": false, 00:08:35.772 "copy": false, 00:08:35.772 "nvme_iov_md": false 00:08:35.772 }, 00:08:35.772 "memory_domains": [ 00:08:35.772 { 00:08:35.772 "dma_device_id": "system", 00:08:35.772 "dma_device_type": 1 00:08:35.772 }, 00:08:35.772 { 00:08:35.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.772 "dma_device_type": 2 00:08:35.772 }, 00:08:35.772 { 00:08:35.772 "dma_device_id": "system", 00:08:35.772 "dma_device_type": 1 00:08:35.772 }, 00:08:35.772 { 00:08:35.772 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.772 "dma_device_type": 2 00:08:35.772 }, 00:08:35.772 { 00:08:35.772 "dma_device_id": "system", 00:08:35.772 "dma_device_type": 1 00:08:35.772 }, 00:08:35.772 { 00:08:35.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.772 "dma_device_type": 2 00:08:35.772 } 00:08:35.772 ], 00:08:35.772 "driver_specific": { 00:08:35.772 "raid": { 00:08:35.772 "uuid": "af5aa16b-4317-4278-9d49-42e8966d0d6c", 00:08:35.772 "strip_size_kb": 64, 00:08:35.772 "state": "online", 00:08:35.772 "raid_level": "concat", 00:08:35.772 "superblock": true, 00:08:35.772 "num_base_bdevs": 3, 00:08:35.772 "num_base_bdevs_discovered": 3, 00:08:35.772 "num_base_bdevs_operational": 3, 00:08:35.772 "base_bdevs_list": [ 00:08:35.772 { 00:08:35.772 "name": "NewBaseBdev", 00:08:35.772 "uuid": "684f9b5f-6703-45ac-94d3-475dba575e6e", 00:08:35.772 "is_configured": true, 00:08:35.772 "data_offset": 2048, 00:08:35.772 "data_size": 63488 00:08:35.772 }, 00:08:35.772 { 00:08:35.772 "name": "BaseBdev2", 00:08:35.772 "uuid": "7416923f-3175-4170-ab19-a733c528ba05", 00:08:35.772 "is_configured": true, 00:08:35.772 "data_offset": 2048, 00:08:35.772 "data_size": 63488 00:08:35.772 }, 00:08:35.772 { 00:08:35.772 "name": "BaseBdev3", 00:08:35.772 "uuid": "7dfa8ae2-69ad-4d0a-aa8d-be37bcc4a613", 00:08:35.772 "is_configured": true, 00:08:35.772 "data_offset": 2048, 00:08:35.772 "data_size": 63488 00:08:35.772 } 00:08:35.772 ] 00:08:35.772 } 00:08:35.772 } 00:08:35.772 }' 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:35.772 BaseBdev2 00:08:35.772 BaseBdev3' 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.772 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.031 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.031 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.031 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.031 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.031 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.031 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 [2024-11-26 13:21:24.500937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.032 [2024-11-26 13:21:24.500961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.032 [2024-11-26 13:21:24.501027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.032 [2024-11-26 13:21:24.501084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.032 [2024-11-26 13:21:24.501101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65750 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 65750 ']' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 65750 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65750 00:08:36.032 killing process with pid 65750 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65750' 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 65750 00:08:36.032 [2024-11-26 13:21:24.542149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.032 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 65750 00:08:36.291 [2024-11-26 13:21:24.743133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.226 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.226 00:08:37.226 real 0m11.336s 00:08:37.226 user 0m19.166s 00:08:37.226 sys 0m1.532s 00:08:37.226 ************************************ 00:08:37.226 END TEST raid_state_function_test_sb 
00:08:37.226 ************************************ 00:08:37.226 13:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.226 13:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.226 13:21:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:37.226 13:21:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:37.226 13:21:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.226 13:21:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.226 ************************************ 00:08:37.226 START TEST raid_superblock_test 00:08:37.226 ************************************ 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:37.226 13:21:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:37.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66381 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66381 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66381 ']' 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.226 13:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.226 [2024-11-26 13:21:25.753865] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:08:37.226 [2024-11-26 13:21:25.754067] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66381 ] 00:08:37.485 [2024-11-26 13:21:25.937196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.485 [2024-11-26 13:21:26.034899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.744 [2024-11-26 13:21:26.200914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.744 [2024-11-26 13:21:26.200975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:38.319 
13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.319 malloc1 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.319 [2024-11-26 13:21:26.703925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:38.319 [2024-11-26 13:21:26.704001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.319 [2024-11-26 13:21:26.704031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:38.319 [2024-11-26 13:21:26.704044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.319 [2024-11-26 13:21:26.706400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.319 [2024-11-26 13:21:26.706441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:38.319 pt1 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.319 malloc2 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.319 [2024-11-26 13:21:26.753472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:38.319 [2024-11-26 13:21:26.753696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.319 [2024-11-26 13:21:26.753766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:38.319 [2024-11-26 13:21:26.753878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.319 [2024-11-26 13:21:26.756337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.319 [2024-11-26 13:21:26.756518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:38.319 
pt2 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.319 malloc3 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.319 [2024-11-26 13:21:26.806936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:38.319 [2024-11-26 13:21:26.806994] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.319 [2024-11-26 13:21:26.807021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:38.319 [2024-11-26 13:21:26.807034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.319 [2024-11-26 13:21:26.809305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.319 [2024-11-26 13:21:26.809343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:38.319 pt3 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.319 [2024-11-26 13:21:26.814989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:38.319 [2024-11-26 13:21:26.817048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:38.319 [2024-11-26 13:21:26.817124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:38.319 [2024-11-26 13:21:26.817314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:38.319 [2024-11-26 13:21:26.817334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:38.319 [2024-11-26 13:21:26.817598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:38.319 [2024-11-26 13:21:26.817782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:38.319 [2024-11-26 13:21:26.817797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:38.319 [2024-11-26 13:21:26.817945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.319 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.320 13:21:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.320 "name": "raid_bdev1", 00:08:38.320 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:38.320 "strip_size_kb": 64, 00:08:38.320 "state": "online", 00:08:38.320 "raid_level": "concat", 00:08:38.320 "superblock": true, 00:08:38.320 "num_base_bdevs": 3, 00:08:38.320 "num_base_bdevs_discovered": 3, 00:08:38.320 "num_base_bdevs_operational": 3, 00:08:38.320 "base_bdevs_list": [ 00:08:38.320 { 00:08:38.320 "name": "pt1", 00:08:38.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.320 "is_configured": true, 00:08:38.320 "data_offset": 2048, 00:08:38.320 "data_size": 63488 00:08:38.320 }, 00:08:38.320 { 00:08:38.320 "name": "pt2", 00:08:38.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.320 "is_configured": true, 00:08:38.320 "data_offset": 2048, 00:08:38.320 "data_size": 63488 00:08:38.320 }, 00:08:38.320 { 00:08:38.320 "name": "pt3", 00:08:38.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:38.320 "is_configured": true, 00:08:38.320 "data_offset": 2048, 00:08:38.320 "data_size": 63488 00:08:38.320 } 00:08:38.320 ] 00:08:38.320 }' 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.320 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.013 [2024-11-26 13:21:27.355362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.013 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.013 "name": "raid_bdev1", 00:08:39.013 "aliases": [ 00:08:39.013 "bab361b1-58cd-4b05-81aa-e8d9ba0642a4" 00:08:39.013 ], 00:08:39.013 "product_name": "Raid Volume", 00:08:39.013 "block_size": 512, 00:08:39.013 "num_blocks": 190464, 00:08:39.013 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:39.013 "assigned_rate_limits": { 00:08:39.013 "rw_ios_per_sec": 0, 00:08:39.013 "rw_mbytes_per_sec": 0, 00:08:39.013 "r_mbytes_per_sec": 0, 00:08:39.013 "w_mbytes_per_sec": 0 00:08:39.013 }, 00:08:39.013 "claimed": false, 00:08:39.013 "zoned": false, 00:08:39.013 "supported_io_types": { 00:08:39.013 "read": true, 00:08:39.013 "write": true, 00:08:39.013 "unmap": true, 00:08:39.013 "flush": true, 00:08:39.013 "reset": true, 00:08:39.013 "nvme_admin": false, 00:08:39.013 "nvme_io": false, 00:08:39.014 "nvme_io_md": false, 00:08:39.014 "write_zeroes": true, 00:08:39.014 "zcopy": false, 00:08:39.014 "get_zone_info": false, 00:08:39.014 "zone_management": false, 00:08:39.014 "zone_append": false, 00:08:39.014 "compare": 
false, 00:08:39.014 "compare_and_write": false, 00:08:39.014 "abort": false, 00:08:39.014 "seek_hole": false, 00:08:39.014 "seek_data": false, 00:08:39.014 "copy": false, 00:08:39.014 "nvme_iov_md": false 00:08:39.014 }, 00:08:39.014 "memory_domains": [ 00:08:39.014 { 00:08:39.014 "dma_device_id": "system", 00:08:39.014 "dma_device_type": 1 00:08:39.014 }, 00:08:39.014 { 00:08:39.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.014 "dma_device_type": 2 00:08:39.014 }, 00:08:39.014 { 00:08:39.014 "dma_device_id": "system", 00:08:39.014 "dma_device_type": 1 00:08:39.014 }, 00:08:39.014 { 00:08:39.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.014 "dma_device_type": 2 00:08:39.014 }, 00:08:39.014 { 00:08:39.014 "dma_device_id": "system", 00:08:39.014 "dma_device_type": 1 00:08:39.014 }, 00:08:39.014 { 00:08:39.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.014 "dma_device_type": 2 00:08:39.014 } 00:08:39.014 ], 00:08:39.014 "driver_specific": { 00:08:39.014 "raid": { 00:08:39.014 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:39.014 "strip_size_kb": 64, 00:08:39.014 "state": "online", 00:08:39.014 "raid_level": "concat", 00:08:39.014 "superblock": true, 00:08:39.014 "num_base_bdevs": 3, 00:08:39.014 "num_base_bdevs_discovered": 3, 00:08:39.014 "num_base_bdevs_operational": 3, 00:08:39.014 "base_bdevs_list": [ 00:08:39.014 { 00:08:39.014 "name": "pt1", 00:08:39.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.014 "is_configured": true, 00:08:39.014 "data_offset": 2048, 00:08:39.014 "data_size": 63488 00:08:39.014 }, 00:08:39.014 { 00:08:39.014 "name": "pt2", 00:08:39.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.014 "is_configured": true, 00:08:39.014 "data_offset": 2048, 00:08:39.014 "data_size": 63488 00:08:39.014 }, 00:08:39.014 { 00:08:39.014 "name": "pt3", 00:08:39.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:39.014 "is_configured": true, 00:08:39.014 "data_offset": 2048, 00:08:39.014 
"data_size": 63488 00:08:39.014 } 00:08:39.014 ] 00:08:39.014 } 00:08:39.014 } 00:08:39.014 }' 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:39.014 pt2 00:08:39.014 pt3' 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.014 13:21:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.014 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:39.274 [2024-11-26 13:21:27.675412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.274 13:21:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bab361b1-58cd-4b05-81aa-e8d9ba0642a4 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bab361b1-58cd-4b05-81aa-e8d9ba0642a4 ']' 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 [2024-11-26 13:21:27.739117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.274 [2024-11-26 13:21:27.739325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.274 [2024-11-26 13:21:27.739418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.274 [2024-11-26 13:21:27.739484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.274 [2024-11-26 13:21:27.739499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.274 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.534 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.534 [2024-11-26 13:21:27.887190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:39.534 [2024-11-26 13:21:27.889384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:08:39.534 [2024-11-26 13:21:27.889442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:39.534 [2024-11-26 13:21:27.889493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:39.534 [2024-11-26 13:21:27.889565] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:39.534 [2024-11-26 13:21:27.889595] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:39.534 [2024-11-26 13:21:27.889634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.534 [2024-11-26 13:21:27.889644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:39.534 request: 00:08:39.534 { 00:08:39.534 "name": "raid_bdev1", 00:08:39.534 "raid_level": "concat", 00:08:39.534 "base_bdevs": [ 00:08:39.534 "malloc1", 00:08:39.534 "malloc2", 00:08:39.534 "malloc3" 00:08:39.534 ], 00:08:39.534 "strip_size_kb": 64, 00:08:39.534 "superblock": false, 00:08:39.534 "method": "bdev_raid_create", 00:08:39.534 "req_id": 1 00:08:39.534 } 00:08:39.534 Got JSON-RPC error response 00:08:39.534 response: 00:08:39.534 { 00:08:39.534 "code": -17, 00:08:39.534 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:39.534 } 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.535 [2024-11-26 13:21:27.951159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:39.535 [2024-11-26 13:21:27.951220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.535 [2024-11-26 13:21:27.951257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:39.535 [2024-11-26 13:21:27.951282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.535 [2024-11-26 13:21:27.953709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.535 [2024-11-26 13:21:27.953747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:39.535 [2024-11-26 13:21:27.953832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:39.535 [2024-11-26 13:21:27.953887] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:39.535 pt1 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.535 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.535 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.535 "name": "raid_bdev1", 
00:08:39.535 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:39.535 "strip_size_kb": 64, 00:08:39.535 "state": "configuring", 00:08:39.535 "raid_level": "concat", 00:08:39.535 "superblock": true, 00:08:39.535 "num_base_bdevs": 3, 00:08:39.535 "num_base_bdevs_discovered": 1, 00:08:39.535 "num_base_bdevs_operational": 3, 00:08:39.535 "base_bdevs_list": [ 00:08:39.535 { 00:08:39.535 "name": "pt1", 00:08:39.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.535 "is_configured": true, 00:08:39.535 "data_offset": 2048, 00:08:39.535 "data_size": 63488 00:08:39.535 }, 00:08:39.535 { 00:08:39.535 "name": null, 00:08:39.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.535 "is_configured": false, 00:08:39.535 "data_offset": 2048, 00:08:39.535 "data_size": 63488 00:08:39.535 }, 00:08:39.535 { 00:08:39.535 "name": null, 00:08:39.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:39.535 "is_configured": false, 00:08:39.535 "data_offset": 2048, 00:08:39.535 "data_size": 63488 00:08:39.535 } 00:08:39.535 ] 00:08:39.535 }' 00:08:39.535 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.535 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.103 [2024-11-26 13:21:28.487269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.103 [2024-11-26 13:21:28.487329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.103 [2024-11-26 13:21:28.487351] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:40.103 [2024-11-26 13:21:28.487363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.103 [2024-11-26 13:21:28.487771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.103 [2024-11-26 13:21:28.487793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.103 [2024-11-26 13:21:28.487861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:40.103 [2024-11-26 13:21:28.487884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.103 pt2 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.103 [2024-11-26 13:21:28.495323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.103 "name": "raid_bdev1", 00:08:40.103 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:40.103 "strip_size_kb": 64, 00:08:40.103 "state": "configuring", 00:08:40.103 "raid_level": "concat", 00:08:40.103 "superblock": true, 00:08:40.103 "num_base_bdevs": 3, 00:08:40.103 "num_base_bdevs_discovered": 1, 00:08:40.103 "num_base_bdevs_operational": 3, 00:08:40.103 "base_bdevs_list": [ 00:08:40.103 { 00:08:40.103 "name": "pt1", 00:08:40.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.103 "is_configured": true, 00:08:40.103 "data_offset": 2048, 00:08:40.103 "data_size": 63488 00:08:40.103 }, 00:08:40.103 { 00:08:40.103 "name": null, 00:08:40.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.103 "is_configured": false, 00:08:40.103 "data_offset": 0, 00:08:40.103 "data_size": 63488 00:08:40.103 }, 00:08:40.103 { 00:08:40.103 "name": null, 00:08:40.103 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.103 "is_configured": false, 00:08:40.103 "data_offset": 2048, 00:08:40.103 "data_size": 63488 00:08:40.103 } 00:08:40.103 ] 00:08:40.103 }' 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.103 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.672 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.673 [2024-11-26 13:21:29.019408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.673 [2024-11-26 13:21:29.019475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.673 [2024-11-26 13:21:29.019493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:40.673 [2024-11-26 13:21:29.019507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.673 [2024-11-26 13:21:29.019930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.673 [2024-11-26 13:21:29.019957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.673 [2024-11-26 13:21:29.020020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:40.673 [2024-11-26 13:21:29.020047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.673 pt2 00:08:40.673 13:21:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.673 [2024-11-26 13:21:29.027419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:40.673 [2024-11-26 13:21:29.027481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.673 [2024-11-26 13:21:29.027498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:40.673 [2024-11-26 13:21:29.027511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.673 [2024-11-26 13:21:29.027916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.673 [2024-11-26 13:21:29.027957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:40.673 [2024-11-26 13:21:29.028021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:40.673 [2024-11-26 13:21:29.028048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:40.673 [2024-11-26 13:21:29.028194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.673 [2024-11-26 13:21:29.028212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.673 [2024-11-26 13:21:29.028506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:08:40.673 [2024-11-26 13:21:29.028675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.673 [2024-11-26 13:21:29.028687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:40.673 [2024-11-26 13:21:29.028860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.673 pt3 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.673 13:21:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.673 "name": "raid_bdev1", 00:08:40.673 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:40.673 "strip_size_kb": 64, 00:08:40.673 "state": "online", 00:08:40.673 "raid_level": "concat", 00:08:40.673 "superblock": true, 00:08:40.673 "num_base_bdevs": 3, 00:08:40.673 "num_base_bdevs_discovered": 3, 00:08:40.673 "num_base_bdevs_operational": 3, 00:08:40.673 "base_bdevs_list": [ 00:08:40.673 { 00:08:40.673 "name": "pt1", 00:08:40.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.673 "is_configured": true, 00:08:40.673 "data_offset": 2048, 00:08:40.673 "data_size": 63488 00:08:40.673 }, 00:08:40.673 { 00:08:40.673 "name": "pt2", 00:08:40.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.673 "is_configured": true, 00:08:40.673 "data_offset": 2048, 00:08:40.673 "data_size": 63488 00:08:40.673 }, 00:08:40.673 { 00:08:40.673 "name": "pt3", 00:08:40.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.673 "is_configured": true, 00:08:40.673 "data_offset": 2048, 00:08:40.673 "data_size": 63488 00:08:40.673 } 00:08:40.673 ] 00:08:40.673 }' 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.673 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.238 [2024-11-26 13:21:29.555788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.238 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.238 "name": "raid_bdev1", 00:08:41.238 "aliases": [ 00:08:41.238 "bab361b1-58cd-4b05-81aa-e8d9ba0642a4" 00:08:41.238 ], 00:08:41.238 "product_name": "Raid Volume", 00:08:41.238 "block_size": 512, 00:08:41.238 "num_blocks": 190464, 00:08:41.238 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:41.238 "assigned_rate_limits": { 00:08:41.238 "rw_ios_per_sec": 0, 00:08:41.238 "rw_mbytes_per_sec": 0, 00:08:41.238 "r_mbytes_per_sec": 0, 00:08:41.238 "w_mbytes_per_sec": 0 00:08:41.238 }, 00:08:41.238 "claimed": false, 00:08:41.238 "zoned": false, 00:08:41.238 "supported_io_types": { 00:08:41.238 "read": true, 00:08:41.238 "write": true, 00:08:41.238 "unmap": true, 00:08:41.238 "flush": true, 00:08:41.238 "reset": true, 00:08:41.238 "nvme_admin": false, 00:08:41.238 "nvme_io": false, 
00:08:41.238 "nvme_io_md": false, 00:08:41.238 "write_zeroes": true, 00:08:41.238 "zcopy": false, 00:08:41.238 "get_zone_info": false, 00:08:41.238 "zone_management": false, 00:08:41.238 "zone_append": false, 00:08:41.238 "compare": false, 00:08:41.238 "compare_and_write": false, 00:08:41.238 "abort": false, 00:08:41.239 "seek_hole": false, 00:08:41.239 "seek_data": false, 00:08:41.239 "copy": false, 00:08:41.239 "nvme_iov_md": false 00:08:41.239 }, 00:08:41.239 "memory_domains": [ 00:08:41.239 { 00:08:41.239 "dma_device_id": "system", 00:08:41.239 "dma_device_type": 1 00:08:41.239 }, 00:08:41.239 { 00:08:41.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.239 "dma_device_type": 2 00:08:41.239 }, 00:08:41.239 { 00:08:41.239 "dma_device_id": "system", 00:08:41.239 "dma_device_type": 1 00:08:41.239 }, 00:08:41.239 { 00:08:41.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.239 "dma_device_type": 2 00:08:41.239 }, 00:08:41.239 { 00:08:41.239 "dma_device_id": "system", 00:08:41.239 "dma_device_type": 1 00:08:41.239 }, 00:08:41.239 { 00:08:41.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.239 "dma_device_type": 2 00:08:41.239 } 00:08:41.239 ], 00:08:41.239 "driver_specific": { 00:08:41.239 "raid": { 00:08:41.239 "uuid": "bab361b1-58cd-4b05-81aa-e8d9ba0642a4", 00:08:41.239 "strip_size_kb": 64, 00:08:41.239 "state": "online", 00:08:41.239 "raid_level": "concat", 00:08:41.239 "superblock": true, 00:08:41.239 "num_base_bdevs": 3, 00:08:41.239 "num_base_bdevs_discovered": 3, 00:08:41.239 "num_base_bdevs_operational": 3, 00:08:41.239 "base_bdevs_list": [ 00:08:41.239 { 00:08:41.239 "name": "pt1", 00:08:41.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.239 "is_configured": true, 00:08:41.239 "data_offset": 2048, 00:08:41.239 "data_size": 63488 00:08:41.239 }, 00:08:41.239 { 00:08:41.239 "name": "pt2", 00:08:41.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.239 "is_configured": true, 00:08:41.239 "data_offset": 2048, 00:08:41.239 
"data_size": 63488 00:08:41.239 }, 00:08:41.239 { 00:08:41.239 "name": "pt3", 00:08:41.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.239 "is_configured": true, 00:08:41.239 "data_offset": 2048, 00:08:41.239 "data_size": 63488 00:08:41.239 } 00:08:41.239 ] 00:08:41.239 } 00:08:41.239 } 00:08:41.239 }' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:41.239 pt2 00:08:41.239 pt3' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.239 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.498 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.498 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.498 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.498 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:41.498 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.499 [2024-11-26 13:21:29.875884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bab361b1-58cd-4b05-81aa-e8d9ba0642a4 '!=' bab361b1-58cd-4b05-81aa-e8d9ba0642a4 ']' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66381 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66381 ']' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66381 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66381 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.499 killing process with pid 66381 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66381' 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66381 00:08:41.499 [2024-11-26 13:21:29.953744] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:41.499 [2024-11-26 13:21:29.953839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.499 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66381 00:08:41.499 [2024-11-26 13:21:29.953895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.499 [2024-11-26 13:21:29.953912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:41.757 [2024-11-26 13:21:30.155812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.700 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:42.700 00:08:42.700 real 0m5.350s 00:08:42.700 user 0m8.221s 00:08:42.700 sys 0m0.826s 00:08:42.700 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.700 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.700 ************************************ 00:08:42.700 END TEST raid_superblock_test 00:08:42.700 ************************************ 00:08:42.700 13:21:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:42.700 13:21:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:42.700 13:21:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.700 13:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.700 ************************************ 00:08:42.700 START TEST raid_read_error_test 00:08:42.700 ************************************ 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:42.700 13:21:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.E8ZgVswaAK 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66634 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66634 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66634 ']' 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:42.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.700 13:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.700 [2024-11-26 13:21:31.170538] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:08:42.700 [2024-11-26 13:21:31.170729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66634 ] 00:08:42.958 [2024-11-26 13:21:31.352481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.958 [2024-11-26 13:21:31.451376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.217 [2024-11-26 13:21:31.621355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.217 [2024-11-26 13:21:31.621402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.785 BaseBdev1_malloc 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.785 true 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.785 [2024-11-26 13:21:32.168676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:43.785 [2024-11-26 13:21:32.168755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.785 [2024-11-26 13:21:32.168780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:43.785 [2024-11-26 13:21:32.168795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.785 [2024-11-26 13:21:32.171147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.785 [2024-11-26 13:21:32.171191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:43.785 BaseBdev1 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.785 BaseBdev2_malloc 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.785 true 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.785 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.785 [2024-11-26 13:21:32.218121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:43.786 [2024-11-26 13:21:32.218200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.786 [2024-11-26 13:21:32.218223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:43.786 [2024-11-26 13:21:32.218237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.786 [2024-11-26 13:21:32.220593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.786 [2024-11-26 13:21:32.220634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:43.786 BaseBdev2 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 BaseBdev3_malloc 00:08:43.786 13:21:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 true 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 [2024-11-26 13:21:32.276965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:43.786 [2024-11-26 13:21:32.277036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.786 [2024-11-26 13:21:32.277059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:43.786 [2024-11-26 13:21:32.277075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.786 [2024-11-26 13:21:32.279509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.786 [2024-11-26 13:21:32.279550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:43.786 BaseBdev3 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 [2024-11-26 13:21:32.285045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.786 [2024-11-26 13:21:32.287118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.786 [2024-11-26 13:21:32.287214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.786 [2024-11-26 13:21:32.287483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:43.786 [2024-11-26 13:21:32.287500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.786 [2024-11-26 13:21:32.287792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:43.786 [2024-11-26 13:21:32.287986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:43.786 [2024-11-26 13:21:32.288012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:43.786 [2024-11-26 13:21:32.288171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.786 13:21:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.786 "name": "raid_bdev1", 00:08:43.786 "uuid": "bf5e68df-9072-464f-acc5-b982576d9346", 00:08:43.786 "strip_size_kb": 64, 00:08:43.786 "state": "online", 00:08:43.786 "raid_level": "concat", 00:08:43.786 "superblock": true, 00:08:43.786 "num_base_bdevs": 3, 00:08:43.786 "num_base_bdevs_discovered": 3, 00:08:43.786 "num_base_bdevs_operational": 3, 00:08:43.786 "base_bdevs_list": [ 00:08:43.786 { 00:08:43.786 "name": "BaseBdev1", 00:08:43.786 "uuid": "bd23fdc6-8d65-5002-aa89-cfa09bc08d6a", 00:08:43.786 "is_configured": true, 00:08:43.786 "data_offset": 2048, 00:08:43.786 "data_size": 63488 00:08:43.786 }, 00:08:43.786 { 00:08:43.786 "name": "BaseBdev2", 00:08:43.786 "uuid": "93375170-8322-5725-a5ee-5970fcbfcf64", 00:08:43.786 "is_configured": true, 00:08:43.786 "data_offset": 2048, 00:08:43.786 "data_size": 63488 
00:08:43.786 }, 00:08:43.786 { 00:08:43.786 "name": "BaseBdev3", 00:08:43.786 "uuid": "12ebc886-eae1-5ac6-a7af-a80bc33de41c", 00:08:43.786 "is_configured": true, 00:08:43.786 "data_offset": 2048, 00:08:43.786 "data_size": 63488 00:08:43.786 } 00:08:43.786 ] 00:08:43.786 }' 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.786 13:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.354 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:44.354 13:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:44.613 [2024-11-26 13:21:32.930215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.549 "name": "raid_bdev1", 00:08:45.549 "uuid": "bf5e68df-9072-464f-acc5-b982576d9346", 00:08:45.549 "strip_size_kb": 64, 00:08:45.549 "state": "online", 00:08:45.549 "raid_level": "concat", 00:08:45.549 "superblock": true, 00:08:45.549 "num_base_bdevs": 3, 00:08:45.549 "num_base_bdevs_discovered": 3, 00:08:45.549 "num_base_bdevs_operational": 3, 00:08:45.549 "base_bdevs_list": [ 00:08:45.549 { 00:08:45.549 "name": "BaseBdev1", 00:08:45.549 "uuid": "bd23fdc6-8d65-5002-aa89-cfa09bc08d6a", 00:08:45.549 "is_configured": true, 00:08:45.549 "data_offset": 2048, 00:08:45.549 "data_size": 63488 
00:08:45.549 }, 00:08:45.549 { 00:08:45.549 "name": "BaseBdev2", 00:08:45.549 "uuid": "93375170-8322-5725-a5ee-5970fcbfcf64", 00:08:45.549 "is_configured": true, 00:08:45.549 "data_offset": 2048, 00:08:45.549 "data_size": 63488 00:08:45.549 }, 00:08:45.549 { 00:08:45.549 "name": "BaseBdev3", 00:08:45.549 "uuid": "12ebc886-eae1-5ac6-a7af-a80bc33de41c", 00:08:45.549 "is_configured": true, 00:08:45.549 "data_offset": 2048, 00:08:45.549 "data_size": 63488 00:08:45.549 } 00:08:45.549 ] 00:08:45.549 }' 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.549 13:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.809 [2024-11-26 13:21:34.342134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.809 [2024-11-26 13:21:34.342202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.809 [2024-11-26 13:21:34.345065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.809 [2024-11-26 13:21:34.345121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.809 [2024-11-26 13:21:34.345166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.809 [2024-11-26 13:21:34.345181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:45.809 { 00:08:45.809 "results": [ 00:08:45.809 { 00:08:45.809 "job": "raid_bdev1", 00:08:45.809 "core_mask": "0x1", 00:08:45.809 "workload": "randrw", 00:08:45.809 "percentage": 50, 
00:08:45.809 "status": "finished", 00:08:45.809 "queue_depth": 1, 00:08:45.809 "io_size": 131072, 00:08:45.809 "runtime": 1.409835, 00:08:45.809 "iops": 13776.080179595485, 00:08:45.809 "mibps": 1722.0100224494356, 00:08:45.809 "io_failed": 1, 00:08:45.809 "io_timeout": 0, 00:08:45.809 "avg_latency_us": 101.134833772519, 00:08:45.809 "min_latency_us": 33.512727272727275, 00:08:45.809 "max_latency_us": 1407.5345454545454 00:08:45.809 } 00:08:45.809 ], 00:08:45.809 "core_count": 1 00:08:45.809 } 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66634 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66634 ']' 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66634 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.809 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66634 00:08:46.068 killing process with pid 66634 00:08:46.068 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.068 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.068 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66634' 00:08:46.068 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66634 00:08:46.068 [2024-11-26 13:21:34.381835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.068 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66634 00:08:46.068 [2024-11-26 
13:21:34.536911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.E8ZgVswaAK 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:47.005 00:08:47.005 real 0m4.373s 00:08:47.005 user 0m5.506s 00:08:47.005 sys 0m0.577s 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.005 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.005 ************************************ 00:08:47.005 END TEST raid_read_error_test 00:08:47.005 ************************************ 00:08:47.005 13:21:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:47.005 13:21:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.005 13:21:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.005 13:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.005 ************************************ 00:08:47.005 START TEST raid_write_error_test 00:08:47.005 ************************************ 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:08:47.005 13:21:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.005 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.006 13:21:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5XZw70JNKM 00:08:47.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66774 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66774 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 66774 ']' 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.006 13:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.264 [2024-11-26 13:21:35.601318] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:08:47.264 [2024-11-26 13:21:35.601511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66774 ] 00:08:47.264 [2024-11-26 13:21:35.783221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.523 [2024-11-26 13:21:35.880125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.523 [2024-11-26 13:21:36.046816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.523 [2024-11-26 13:21:36.046859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 BaseBdev1_malloc 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 true 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 [2024-11-26 13:21:36.548035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.091 [2024-11-26 13:21:36.548125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.091 [2024-11-26 13:21:36.548152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:48.091 [2024-11-26 13:21:36.548169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.091 [2024-11-26 13:21:36.551085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.091 [2024-11-26 13:21:36.551126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.091 BaseBdev1 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.091 BaseBdev2_malloc 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 true 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 [2024-11-26 13:21:36.597854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.091 [2024-11-26 13:21:36.597906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.091 [2024-11-26 13:21:36.597926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.091 [2024-11-26 13:21:36.597940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.091 [2024-11-26 13:21:36.600210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.091 [2024-11-26 13:21:36.600257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.091 BaseBdev2 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.091 13:21:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.350 BaseBdev3_malloc 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.350 true 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.350 [2024-11-26 13:21:36.669759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.350 [2024-11-26 13:21:36.669808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.350 [2024-11-26 13:21:36.669830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:48.350 [2024-11-26 13:21:36.669845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.350 [2024-11-26 13:21:36.672168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.350 [2024-11-26 13:21:36.672207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:48.350 BaseBdev3 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.350 [2024-11-26 13:21:36.677837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.350 [2024-11-26 13:21:36.679870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.350 [2024-11-26 13:21:36.679962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.350 [2024-11-26 13:21:36.680180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:48.350 [2024-11-26 13:21:36.680197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.350 [2024-11-26 13:21:36.680512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:48.350 [2024-11-26 13:21:36.680701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:48.350 [2024-11-26 13:21:36.680728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:48.350 [2024-11-26 13:21:36.680900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.350 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.351 "name": "raid_bdev1", 00:08:48.351 "uuid": "315459cb-1e5e-4061-b08e-d79834538d38", 00:08:48.351 "strip_size_kb": 64, 00:08:48.351 "state": "online", 00:08:48.351 "raid_level": "concat", 00:08:48.351 "superblock": true, 00:08:48.351 "num_base_bdevs": 3, 00:08:48.351 "num_base_bdevs_discovered": 3, 00:08:48.351 "num_base_bdevs_operational": 3, 00:08:48.351 "base_bdevs_list": [ 00:08:48.351 { 00:08:48.351 
"name": "BaseBdev1", 00:08:48.351 "uuid": "1b71a619-c636-5f5c-8bda-9fe13bef134b", 00:08:48.351 "is_configured": true, 00:08:48.351 "data_offset": 2048, 00:08:48.351 "data_size": 63488 00:08:48.351 }, 00:08:48.351 { 00:08:48.351 "name": "BaseBdev2", 00:08:48.351 "uuid": "d155a746-d864-5b54-8ad8-3996775abf13", 00:08:48.351 "is_configured": true, 00:08:48.351 "data_offset": 2048, 00:08:48.351 "data_size": 63488 00:08:48.351 }, 00:08:48.351 { 00:08:48.351 "name": "BaseBdev3", 00:08:48.351 "uuid": "a9402860-c093-5b61-8669-f8e9f4c653d2", 00:08:48.351 "is_configured": true, 00:08:48.351 "data_offset": 2048, 00:08:48.351 "data_size": 63488 00:08:48.351 } 00:08:48.351 ] 00:08:48.351 }' 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.351 13:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.918 13:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.918 13:21:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.918 [2024-11-26 13:21:37.319049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.854 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.854 "name": "raid_bdev1", 00:08:49.854 "uuid": "315459cb-1e5e-4061-b08e-d79834538d38", 00:08:49.854 "strip_size_kb": 64, 00:08:49.854 "state": "online", 
00:08:49.854 "raid_level": "concat", 00:08:49.854 "superblock": true, 00:08:49.854 "num_base_bdevs": 3, 00:08:49.854 "num_base_bdevs_discovered": 3, 00:08:49.854 "num_base_bdevs_operational": 3, 00:08:49.855 "base_bdevs_list": [ 00:08:49.855 { 00:08:49.855 "name": "BaseBdev1", 00:08:49.855 "uuid": "1b71a619-c636-5f5c-8bda-9fe13bef134b", 00:08:49.855 "is_configured": true, 00:08:49.855 "data_offset": 2048, 00:08:49.855 "data_size": 63488 00:08:49.855 }, 00:08:49.855 { 00:08:49.855 "name": "BaseBdev2", 00:08:49.855 "uuid": "d155a746-d864-5b54-8ad8-3996775abf13", 00:08:49.855 "is_configured": true, 00:08:49.855 "data_offset": 2048, 00:08:49.855 "data_size": 63488 00:08:49.855 }, 00:08:49.855 { 00:08:49.855 "name": "BaseBdev3", 00:08:49.855 "uuid": "a9402860-c093-5b61-8669-f8e9f4c653d2", 00:08:49.855 "is_configured": true, 00:08:49.855 "data_offset": 2048, 00:08:49.855 "data_size": 63488 00:08:49.855 } 00:08:49.855 ] 00:08:49.855 }' 00:08:49.855 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.855 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.422 [2024-11-26 13:21:38.730911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.422 [2024-11-26 13:21:38.730954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.422 [2024-11-26 13:21:38.733545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.422 [2024-11-26 13:21:38.733597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.422 [2024-11-26 13:21:38.733643] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.422 [2024-11-26 13:21:38.733658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:50.422 { 00:08:50.422 "results": [ 00:08:50.422 { 00:08:50.422 "job": "raid_bdev1", 00:08:50.422 "core_mask": "0x1", 00:08:50.422 "workload": "randrw", 00:08:50.422 "percentage": 50, 00:08:50.422 "status": "finished", 00:08:50.422 "queue_depth": 1, 00:08:50.422 "io_size": 131072, 00:08:50.422 "runtime": 1.409861, 00:08:50.422 "iops": 13887.184623164978, 00:08:50.422 "mibps": 1735.8980778956222, 00:08:50.422 "io_failed": 1, 00:08:50.422 "io_timeout": 0, 00:08:50.422 "avg_latency_us": 100.29021673321571, 00:08:50.422 "min_latency_us": 33.512727272727275, 00:08:50.422 "max_latency_us": 1645.8472727272726 00:08:50.422 } 00:08:50.422 ], 00:08:50.422 "core_count": 1 00:08:50.422 } 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66774 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 66774 ']' 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 66774 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66774 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.422 killing process with pid 66774 00:08:50.422 
13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66774' 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 66774 00:08:50.422 [2024-11-26 13:21:38.768660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.422 13:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 66774 00:08:50.422 [2024-11-26 13:21:38.924062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5XZw70JNKM 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:51.358 00:08:51.358 real 0m4.324s 00:08:51.358 user 0m5.417s 00:08:51.358 sys 0m0.547s 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.358 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.358 ************************************ 00:08:51.358 END TEST raid_write_error_test 00:08:51.358 ************************************ 00:08:51.358 13:21:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:51.358 13:21:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:51.358 13:21:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.358 13:21:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.358 13:21:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.358 ************************************ 00:08:51.358 START TEST raid_state_function_test 00:08:51.359 ************************************ 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66918 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66918' 00:08:51.359 Process raid pid: 66918 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66918 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66918 ']' 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.359 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.618 [2024-11-26 13:21:39.980069] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:08:51.618 [2024-11-26 13:21:39.980273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.618 [2024-11-26 13:21:40.164334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.876 [2024-11-26 13:21:40.262310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.876 [2024-11-26 13:21:40.431104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.876 [2024-11-26 13:21:40.431148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.444 [2024-11-26 13:21:40.915380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.444 [2024-11-26 13:21:40.915442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.444 [2024-11-26 13:21:40.915457] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.444 [2024-11-26 13:21:40.915470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.444 [2024-11-26 13:21:40.915478] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.444 [2024-11-26 13:21:40.915490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.444 
13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.444 "name": "Existed_Raid", 00:08:52.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.444 "strip_size_kb": 0, 00:08:52.444 "state": "configuring", 00:08:52.444 "raid_level": "raid1", 00:08:52.444 "superblock": false, 00:08:52.444 "num_base_bdevs": 3, 00:08:52.444 "num_base_bdevs_discovered": 0, 00:08:52.444 "num_base_bdevs_operational": 3, 00:08:52.444 "base_bdevs_list": [ 00:08:52.444 { 00:08:52.444 "name": "BaseBdev1", 00:08:52.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.444 "is_configured": false, 00:08:52.444 "data_offset": 0, 00:08:52.444 "data_size": 0 00:08:52.444 }, 00:08:52.444 { 00:08:52.444 "name": "BaseBdev2", 00:08:52.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.444 "is_configured": false, 00:08:52.444 "data_offset": 0, 00:08:52.444 "data_size": 0 00:08:52.444 }, 00:08:52.444 { 00:08:52.444 "name": "BaseBdev3", 00:08:52.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.444 "is_configured": false, 00:08:52.444 "data_offset": 0, 00:08:52.444 "data_size": 0 00:08:52.444 } 00:08:52.444 ] 00:08:52.444 }' 00:08:52.444 13:21:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.444 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 [2024-11-26 13:21:41.435456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.012 [2024-11-26 13:21:41.435489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 [2024-11-26 13:21:41.443454] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.012 [2024-11-26 13:21:41.443493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.012 [2024-11-26 13:21:41.443505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.012 [2024-11-26 13:21:41.443518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.012 [2024-11-26 13:21:41.443527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.012 [2024-11-26 13:21:41.443539] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 [2024-11-26 13:21:41.481741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.012 BaseBdev1 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.012 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 [ 00:08:53.012 { 00:08:53.012 "name": "BaseBdev1", 00:08:53.012 "aliases": [ 00:08:53.012 "3d3b1904-39f0-4bd3-a192-61d9d581ceb9" 00:08:53.012 ], 00:08:53.012 "product_name": "Malloc disk", 00:08:53.012 "block_size": 512, 00:08:53.013 "num_blocks": 65536, 00:08:53.013 "uuid": "3d3b1904-39f0-4bd3-a192-61d9d581ceb9", 00:08:53.013 "assigned_rate_limits": { 00:08:53.013 "rw_ios_per_sec": 0, 00:08:53.013 "rw_mbytes_per_sec": 0, 00:08:53.013 "r_mbytes_per_sec": 0, 00:08:53.013 "w_mbytes_per_sec": 0 00:08:53.013 }, 00:08:53.013 "claimed": true, 00:08:53.013 "claim_type": "exclusive_write", 00:08:53.013 "zoned": false, 00:08:53.013 "supported_io_types": { 00:08:53.013 "read": true, 00:08:53.013 "write": true, 00:08:53.013 "unmap": true, 00:08:53.013 "flush": true, 00:08:53.013 "reset": true, 00:08:53.013 "nvme_admin": false, 00:08:53.013 "nvme_io": false, 00:08:53.013 "nvme_io_md": false, 00:08:53.013 "write_zeroes": true, 00:08:53.013 "zcopy": true, 00:08:53.013 "get_zone_info": false, 00:08:53.013 "zone_management": false, 00:08:53.013 "zone_append": false, 00:08:53.013 "compare": false, 00:08:53.013 "compare_and_write": false, 00:08:53.013 "abort": true, 00:08:53.013 "seek_hole": false, 00:08:53.013 "seek_data": false, 00:08:53.013 "copy": true, 00:08:53.013 "nvme_iov_md": false 00:08:53.013 }, 00:08:53.013 "memory_domains": [ 00:08:53.013 { 00:08:53.013 "dma_device_id": "system", 00:08:53.013 "dma_device_type": 1 00:08:53.013 }, 00:08:53.013 { 00:08:53.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.013 "dma_device_type": 2 00:08:53.013 } 00:08:53.013 ], 00:08:53.013 "driver_specific": {} 00:08:53.013 } 00:08:53.013 ] 00:08:53.013 13:21:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:53.013 "name": "Existed_Raid", 00:08:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.013 "strip_size_kb": 0, 00:08:53.013 "state": "configuring", 00:08:53.013 "raid_level": "raid1", 00:08:53.013 "superblock": false, 00:08:53.013 "num_base_bdevs": 3, 00:08:53.013 "num_base_bdevs_discovered": 1, 00:08:53.013 "num_base_bdevs_operational": 3, 00:08:53.013 "base_bdevs_list": [ 00:08:53.013 { 00:08:53.013 "name": "BaseBdev1", 00:08:53.013 "uuid": "3d3b1904-39f0-4bd3-a192-61d9d581ceb9", 00:08:53.013 "is_configured": true, 00:08:53.013 "data_offset": 0, 00:08:53.013 "data_size": 65536 00:08:53.013 }, 00:08:53.013 { 00:08:53.013 "name": "BaseBdev2", 00:08:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.013 "is_configured": false, 00:08:53.013 "data_offset": 0, 00:08:53.013 "data_size": 0 00:08:53.013 }, 00:08:53.013 { 00:08:53.013 "name": "BaseBdev3", 00:08:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.013 "is_configured": false, 00:08:53.013 "data_offset": 0, 00:08:53.013 "data_size": 0 00:08:53.013 } 00:08:53.013 ] 00:08:53.013 }' 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.013 13:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.582 [2024-11-26 13:21:42.009858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.582 [2024-11-26 13:21:42.009905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.582 [2024-11-26 13:21:42.017916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.582 [2024-11-26 13:21:42.019931] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.582 [2024-11-26 13:21:42.019973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.582 [2024-11-26 13:21:42.019986] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.582 [2024-11-26 13:21:42.019998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.582 "name": "Existed_Raid", 00:08:53.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.582 "strip_size_kb": 0, 00:08:53.582 "state": "configuring", 00:08:53.582 "raid_level": "raid1", 00:08:53.582 "superblock": false, 00:08:53.582 "num_base_bdevs": 3, 00:08:53.582 "num_base_bdevs_discovered": 1, 00:08:53.582 "num_base_bdevs_operational": 3, 00:08:53.582 "base_bdevs_list": [ 00:08:53.582 { 00:08:53.582 "name": "BaseBdev1", 00:08:53.582 "uuid": "3d3b1904-39f0-4bd3-a192-61d9d581ceb9", 00:08:53.582 "is_configured": true, 00:08:53.582 "data_offset": 0, 00:08:53.582 "data_size": 65536 00:08:53.582 }, 00:08:53.582 { 00:08:53.582 "name": "BaseBdev2", 00:08:53.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.582 
"is_configured": false, 00:08:53.582 "data_offset": 0, 00:08:53.582 "data_size": 0 00:08:53.582 }, 00:08:53.582 { 00:08:53.582 "name": "BaseBdev3", 00:08:53.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.582 "is_configured": false, 00:08:53.582 "data_offset": 0, 00:08:53.582 "data_size": 0 00:08:53.582 } 00:08:53.582 ] 00:08:53.582 }' 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.582 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.151 [2024-11-26 13:21:42.571434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.151 BaseBdev2 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.151 13:21:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.151 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.151 [ 00:08:54.151 { 00:08:54.151 "name": "BaseBdev2", 00:08:54.151 "aliases": [ 00:08:54.151 "126215cf-1f13-4a94-8fa4-fb93e340f29d" 00:08:54.151 ], 00:08:54.151 "product_name": "Malloc disk", 00:08:54.151 "block_size": 512, 00:08:54.151 "num_blocks": 65536, 00:08:54.151 "uuid": "126215cf-1f13-4a94-8fa4-fb93e340f29d", 00:08:54.151 "assigned_rate_limits": { 00:08:54.151 "rw_ios_per_sec": 0, 00:08:54.151 "rw_mbytes_per_sec": 0, 00:08:54.151 "r_mbytes_per_sec": 0, 00:08:54.151 "w_mbytes_per_sec": 0 00:08:54.151 }, 00:08:54.151 "claimed": true, 00:08:54.151 "claim_type": "exclusive_write", 00:08:54.151 "zoned": false, 00:08:54.151 "supported_io_types": { 00:08:54.151 "read": true, 00:08:54.151 "write": true, 00:08:54.151 "unmap": true, 00:08:54.151 "flush": true, 00:08:54.151 "reset": true, 00:08:54.151 "nvme_admin": false, 00:08:54.151 "nvme_io": false, 00:08:54.151 "nvme_io_md": false, 00:08:54.151 "write_zeroes": true, 00:08:54.151 "zcopy": true, 00:08:54.152 "get_zone_info": false, 00:08:54.152 "zone_management": false, 00:08:54.152 "zone_append": false, 00:08:54.152 "compare": false, 00:08:54.152 "compare_and_write": false, 00:08:54.152 "abort": true, 00:08:54.152 "seek_hole": false, 00:08:54.152 "seek_data": false, 00:08:54.152 "copy": true, 00:08:54.152 "nvme_iov_md": false 00:08:54.152 }, 00:08:54.152 
"memory_domains": [ 00:08:54.152 { 00:08:54.152 "dma_device_id": "system", 00:08:54.152 "dma_device_type": 1 00:08:54.152 }, 00:08:54.152 { 00:08:54.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.152 "dma_device_type": 2 00:08:54.152 } 00:08:54.152 ], 00:08:54.152 "driver_specific": {} 00:08:54.152 } 00:08:54.152 ] 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.152 "name": "Existed_Raid", 00:08:54.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.152 "strip_size_kb": 0, 00:08:54.152 "state": "configuring", 00:08:54.152 "raid_level": "raid1", 00:08:54.152 "superblock": false, 00:08:54.152 "num_base_bdevs": 3, 00:08:54.152 "num_base_bdevs_discovered": 2, 00:08:54.152 "num_base_bdevs_operational": 3, 00:08:54.152 "base_bdevs_list": [ 00:08:54.152 { 00:08:54.152 "name": "BaseBdev1", 00:08:54.152 "uuid": "3d3b1904-39f0-4bd3-a192-61d9d581ceb9", 00:08:54.152 "is_configured": true, 00:08:54.152 "data_offset": 0, 00:08:54.152 "data_size": 65536 00:08:54.152 }, 00:08:54.152 { 00:08:54.152 "name": "BaseBdev2", 00:08:54.152 "uuid": "126215cf-1f13-4a94-8fa4-fb93e340f29d", 00:08:54.152 "is_configured": true, 00:08:54.152 "data_offset": 0, 00:08:54.152 "data_size": 65536 00:08:54.152 }, 00:08:54.152 { 00:08:54.152 "name": "BaseBdev3", 00:08:54.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.152 "is_configured": false, 00:08:54.152 "data_offset": 0, 00:08:54.152 "data_size": 0 00:08:54.152 } 00:08:54.152 ] 00:08:54.152 }' 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.152 13:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 [2024-11-26 13:21:43.183778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.719 [2024-11-26 13:21:43.183822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.719 [2024-11-26 13:21:43.183838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:54.719 [2024-11-26 13:21:43.184124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:54.719 [2024-11-26 13:21:43.184332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.719 [2024-11-26 13:21:43.184347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:54.719 [2024-11-26 13:21:43.184652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.719 BaseBdev3 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 [ 00:08:54.719 { 00:08:54.719 "name": "BaseBdev3", 00:08:54.719 "aliases": [ 00:08:54.719 "4d26194d-0be9-4bb0-9d68-fa404f8ab2c8" 00:08:54.719 ], 00:08:54.719 "product_name": "Malloc disk", 00:08:54.719 "block_size": 512, 00:08:54.719 "num_blocks": 65536, 00:08:54.719 "uuid": "4d26194d-0be9-4bb0-9d68-fa404f8ab2c8", 00:08:54.719 "assigned_rate_limits": { 00:08:54.719 "rw_ios_per_sec": 0, 00:08:54.719 "rw_mbytes_per_sec": 0, 00:08:54.719 "r_mbytes_per_sec": 0, 00:08:54.719 "w_mbytes_per_sec": 0 00:08:54.719 }, 00:08:54.719 "claimed": true, 00:08:54.719 "claim_type": "exclusive_write", 00:08:54.719 "zoned": false, 00:08:54.719 "supported_io_types": { 00:08:54.719 "read": true, 00:08:54.719 "write": true, 00:08:54.719 "unmap": true, 00:08:54.719 "flush": true, 00:08:54.719 "reset": true, 00:08:54.719 "nvme_admin": false, 00:08:54.719 "nvme_io": false, 00:08:54.719 "nvme_io_md": false, 00:08:54.719 "write_zeroes": true, 00:08:54.719 "zcopy": true, 00:08:54.719 "get_zone_info": false, 00:08:54.719 "zone_management": false, 00:08:54.719 "zone_append": false, 00:08:54.719 "compare": false, 00:08:54.719 "compare_and_write": false, 00:08:54.719 "abort": true, 00:08:54.719 "seek_hole": false, 00:08:54.719 "seek_data": false, 00:08:54.719 
"copy": true, 00:08:54.719 "nvme_iov_md": false 00:08:54.719 }, 00:08:54.719 "memory_domains": [ 00:08:54.719 { 00:08:54.719 "dma_device_id": "system", 00:08:54.719 "dma_device_type": 1 00:08:54.719 }, 00:08:54.719 { 00:08:54.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.719 "dma_device_type": 2 00:08:54.719 } 00:08:54.719 ], 00:08:54.719 "driver_specific": {} 00:08:54.719 } 00:08:54.719 ] 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.719 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.720 13:21:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.720 "name": "Existed_Raid", 00:08:54.720 "uuid": "4fd37310-894d-4e0f-a8d9-6845a204e413", 00:08:54.720 "strip_size_kb": 0, 00:08:54.720 "state": "online", 00:08:54.720 "raid_level": "raid1", 00:08:54.720 "superblock": false, 00:08:54.720 "num_base_bdevs": 3, 00:08:54.720 "num_base_bdevs_discovered": 3, 00:08:54.720 "num_base_bdevs_operational": 3, 00:08:54.720 "base_bdevs_list": [ 00:08:54.720 { 00:08:54.720 "name": "BaseBdev1", 00:08:54.720 "uuid": "3d3b1904-39f0-4bd3-a192-61d9d581ceb9", 00:08:54.720 "is_configured": true, 00:08:54.720 "data_offset": 0, 00:08:54.720 "data_size": 65536 00:08:54.720 }, 00:08:54.720 { 00:08:54.720 "name": "BaseBdev2", 00:08:54.720 "uuid": "126215cf-1f13-4a94-8fa4-fb93e340f29d", 00:08:54.720 "is_configured": true, 00:08:54.720 "data_offset": 0, 00:08:54.720 "data_size": 65536 00:08:54.720 }, 00:08:54.720 { 00:08:54.720 "name": "BaseBdev3", 00:08:54.720 "uuid": "4d26194d-0be9-4bb0-9d68-fa404f8ab2c8", 00:08:54.720 "is_configured": true, 00:08:54.720 "data_offset": 0, 00:08:54.720 "data_size": 65536 00:08:54.720 } 00:08:54.720 ] 00:08:54.720 }' 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.720 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.286 13:21:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.286 [2024-11-26 13:21:43.748190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.286 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.286 "name": "Existed_Raid", 00:08:55.286 "aliases": [ 00:08:55.286 "4fd37310-894d-4e0f-a8d9-6845a204e413" 00:08:55.286 ], 00:08:55.286 "product_name": "Raid Volume", 00:08:55.286 "block_size": 512, 00:08:55.286 "num_blocks": 65536, 00:08:55.286 "uuid": "4fd37310-894d-4e0f-a8d9-6845a204e413", 00:08:55.286 "assigned_rate_limits": { 00:08:55.287 "rw_ios_per_sec": 0, 00:08:55.287 "rw_mbytes_per_sec": 0, 00:08:55.287 "r_mbytes_per_sec": 0, 00:08:55.287 "w_mbytes_per_sec": 0 00:08:55.287 }, 00:08:55.287 "claimed": false, 00:08:55.287 "zoned": false, 
00:08:55.287 "supported_io_types": { 00:08:55.287 "read": true, 00:08:55.287 "write": true, 00:08:55.287 "unmap": false, 00:08:55.287 "flush": false, 00:08:55.287 "reset": true, 00:08:55.287 "nvme_admin": false, 00:08:55.287 "nvme_io": false, 00:08:55.287 "nvme_io_md": false, 00:08:55.287 "write_zeroes": true, 00:08:55.287 "zcopy": false, 00:08:55.287 "get_zone_info": false, 00:08:55.287 "zone_management": false, 00:08:55.287 "zone_append": false, 00:08:55.287 "compare": false, 00:08:55.287 "compare_and_write": false, 00:08:55.287 "abort": false, 00:08:55.287 "seek_hole": false, 00:08:55.287 "seek_data": false, 00:08:55.287 "copy": false, 00:08:55.287 "nvme_iov_md": false 00:08:55.287 }, 00:08:55.287 "memory_domains": [ 00:08:55.287 { 00:08:55.287 "dma_device_id": "system", 00:08:55.287 "dma_device_type": 1 00:08:55.287 }, 00:08:55.287 { 00:08:55.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.287 "dma_device_type": 2 00:08:55.287 }, 00:08:55.287 { 00:08:55.287 "dma_device_id": "system", 00:08:55.287 "dma_device_type": 1 00:08:55.287 }, 00:08:55.287 { 00:08:55.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.287 "dma_device_type": 2 00:08:55.287 }, 00:08:55.287 { 00:08:55.287 "dma_device_id": "system", 00:08:55.287 "dma_device_type": 1 00:08:55.287 }, 00:08:55.287 { 00:08:55.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.287 "dma_device_type": 2 00:08:55.287 } 00:08:55.287 ], 00:08:55.287 "driver_specific": { 00:08:55.287 "raid": { 00:08:55.287 "uuid": "4fd37310-894d-4e0f-a8d9-6845a204e413", 00:08:55.287 "strip_size_kb": 0, 00:08:55.287 "state": "online", 00:08:55.287 "raid_level": "raid1", 00:08:55.287 "superblock": false, 00:08:55.287 "num_base_bdevs": 3, 00:08:55.287 "num_base_bdevs_discovered": 3, 00:08:55.287 "num_base_bdevs_operational": 3, 00:08:55.287 "base_bdevs_list": [ 00:08:55.287 { 00:08:55.287 "name": "BaseBdev1", 00:08:55.287 "uuid": "3d3b1904-39f0-4bd3-a192-61d9d581ceb9", 00:08:55.287 "is_configured": true, 00:08:55.287 
"data_offset": 0, 00:08:55.287 "data_size": 65536 00:08:55.287 }, 00:08:55.287 { 00:08:55.287 "name": "BaseBdev2", 00:08:55.287 "uuid": "126215cf-1f13-4a94-8fa4-fb93e340f29d", 00:08:55.287 "is_configured": true, 00:08:55.287 "data_offset": 0, 00:08:55.287 "data_size": 65536 00:08:55.287 }, 00:08:55.287 { 00:08:55.287 "name": "BaseBdev3", 00:08:55.287 "uuid": "4d26194d-0be9-4bb0-9d68-fa404f8ab2c8", 00:08:55.287 "is_configured": true, 00:08:55.287 "data_offset": 0, 00:08:55.287 "data_size": 65536 00:08:55.287 } 00:08:55.287 ] 00:08:55.287 } 00:08:55.287 } 00:08:55.287 }' 00:08:55.287 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.287 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.287 BaseBdev2 00:08:55.287 BaseBdev3' 00:08:55.287 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.545 13:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.545 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.545 [2024-11-26 13:21:44.060014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.804 "name": "Existed_Raid", 00:08:55.804 "uuid": "4fd37310-894d-4e0f-a8d9-6845a204e413", 00:08:55.804 "strip_size_kb": 0, 00:08:55.804 "state": "online", 00:08:55.804 "raid_level": "raid1", 00:08:55.804 "superblock": false, 00:08:55.804 "num_base_bdevs": 3, 00:08:55.804 "num_base_bdevs_discovered": 2, 00:08:55.804 "num_base_bdevs_operational": 2, 00:08:55.804 "base_bdevs_list": [ 00:08:55.804 { 00:08:55.804 "name": null, 00:08:55.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.804 "is_configured": false, 00:08:55.804 "data_offset": 0, 00:08:55.804 "data_size": 65536 00:08:55.804 }, 00:08:55.804 { 00:08:55.804 "name": "BaseBdev2", 00:08:55.804 "uuid": "126215cf-1f13-4a94-8fa4-fb93e340f29d", 00:08:55.804 "is_configured": true, 00:08:55.804 "data_offset": 0, 00:08:55.804 "data_size": 65536 00:08:55.804 }, 00:08:55.804 { 00:08:55.804 "name": "BaseBdev3", 00:08:55.804 "uuid": "4d26194d-0be9-4bb0-9d68-fa404f8ab2c8", 00:08:55.804 "is_configured": true, 00:08:55.804 "data_offset": 0, 00:08:55.804 "data_size": 65536 00:08:55.804 } 00:08:55.804 ] 
00:08:55.804 }' 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.804 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 [2024-11-26 13:21:44.704384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.372 13:21:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 [2024-11-26 13:21:44.833528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.372 [2024-11-26 13:21:44.833816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.372 [2024-11-26 13:21:44.900336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.372 [2024-11-26 13:21:44.900382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.372 [2024-11-26 13:21:44.900399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.372 13:21:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.631 BaseBdev2 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.631 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.632 
13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 13:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 [ 00:08:56.632 { 00:08:56.632 "name": "BaseBdev2", 00:08:56.632 "aliases": [ 00:08:56.632 "747256f5-0e81-404a-94ad-a71c11e7268f" 00:08:56.632 ], 00:08:56.632 "product_name": "Malloc disk", 00:08:56.632 "block_size": 512, 00:08:56.632 "num_blocks": 65536, 00:08:56.632 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:08:56.632 "assigned_rate_limits": { 00:08:56.632 "rw_ios_per_sec": 0, 00:08:56.632 "rw_mbytes_per_sec": 0, 00:08:56.632 "r_mbytes_per_sec": 0, 00:08:56.632 "w_mbytes_per_sec": 0 00:08:56.632 }, 00:08:56.632 "claimed": false, 00:08:56.632 "zoned": false, 00:08:56.632 "supported_io_types": { 00:08:56.632 "read": true, 00:08:56.632 "write": true, 00:08:56.632 "unmap": true, 00:08:56.632 "flush": true, 00:08:56.632 "reset": true, 00:08:56.632 "nvme_admin": false, 00:08:56.632 "nvme_io": false, 00:08:56.632 "nvme_io_md": false, 00:08:56.632 "write_zeroes": true, 
00:08:56.632 "zcopy": true, 00:08:56.632 "get_zone_info": false, 00:08:56.632 "zone_management": false, 00:08:56.632 "zone_append": false, 00:08:56.632 "compare": false, 00:08:56.632 "compare_and_write": false, 00:08:56.632 "abort": true, 00:08:56.632 "seek_hole": false, 00:08:56.632 "seek_data": false, 00:08:56.632 "copy": true, 00:08:56.632 "nvme_iov_md": false 00:08:56.632 }, 00:08:56.632 "memory_domains": [ 00:08:56.632 { 00:08:56.632 "dma_device_id": "system", 00:08:56.632 "dma_device_type": 1 00:08:56.632 }, 00:08:56.632 { 00:08:56.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.632 "dma_device_type": 2 00:08:56.632 } 00:08:56.632 ], 00:08:56.632 "driver_specific": {} 00:08:56.632 } 00:08:56.632 ] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 BaseBdev3 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.632 13:21:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 [ 00:08:56.632 { 00:08:56.632 "name": "BaseBdev3", 00:08:56.632 "aliases": [ 00:08:56.632 "37bbaa80-9393-43d5-b203-5efb6321be2a" 00:08:56.632 ], 00:08:56.632 "product_name": "Malloc disk", 00:08:56.632 "block_size": 512, 00:08:56.632 "num_blocks": 65536, 00:08:56.632 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:08:56.632 "assigned_rate_limits": { 00:08:56.632 "rw_ios_per_sec": 0, 00:08:56.632 "rw_mbytes_per_sec": 0, 00:08:56.632 "r_mbytes_per_sec": 0, 00:08:56.632 "w_mbytes_per_sec": 0 00:08:56.632 }, 00:08:56.632 "claimed": false, 00:08:56.632 "zoned": false, 00:08:56.632 "supported_io_types": { 00:08:56.632 "read": true, 00:08:56.632 "write": true, 00:08:56.632 "unmap": true, 00:08:56.632 "flush": true, 00:08:56.632 "reset": true, 00:08:56.632 "nvme_admin": false, 00:08:56.632 "nvme_io": false, 00:08:56.632 "nvme_io_md": false, 00:08:56.632 "write_zeroes": true, 
00:08:56.632 "zcopy": true, 00:08:56.632 "get_zone_info": false, 00:08:56.632 "zone_management": false, 00:08:56.632 "zone_append": false, 00:08:56.632 "compare": false, 00:08:56.632 "compare_and_write": false, 00:08:56.632 "abort": true, 00:08:56.632 "seek_hole": false, 00:08:56.632 "seek_data": false, 00:08:56.632 "copy": true, 00:08:56.632 "nvme_iov_md": false 00:08:56.632 }, 00:08:56.632 "memory_domains": [ 00:08:56.632 { 00:08:56.632 "dma_device_id": "system", 00:08:56.632 "dma_device_type": 1 00:08:56.632 }, 00:08:56.632 { 00:08:56.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.632 "dma_device_type": 2 00:08:56.632 } 00:08:56.632 ], 00:08:56.632 "driver_specific": {} 00:08:56.632 } 00:08:56.632 ] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 [2024-11-26 13:21:45.097204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.632 [2024-11-26 13:21:45.097424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.632 [2024-11-26 13:21:45.097547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.632 [2024-11-26 13:21:45.099747] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.632 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:56.632 "name": "Existed_Raid", 00:08:56.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.632 "strip_size_kb": 0, 00:08:56.632 "state": "configuring", 00:08:56.632 "raid_level": "raid1", 00:08:56.632 "superblock": false, 00:08:56.632 "num_base_bdevs": 3, 00:08:56.632 "num_base_bdevs_discovered": 2, 00:08:56.632 "num_base_bdevs_operational": 3, 00:08:56.632 "base_bdevs_list": [ 00:08:56.632 { 00:08:56.632 "name": "BaseBdev1", 00:08:56.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.632 "is_configured": false, 00:08:56.632 "data_offset": 0, 00:08:56.632 "data_size": 0 00:08:56.632 }, 00:08:56.632 { 00:08:56.632 "name": "BaseBdev2", 00:08:56.632 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:08:56.632 "is_configured": true, 00:08:56.632 "data_offset": 0, 00:08:56.633 "data_size": 65536 00:08:56.633 }, 00:08:56.633 { 00:08:56.633 "name": "BaseBdev3", 00:08:56.633 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:08:56.633 "is_configured": true, 00:08:56.633 "data_offset": 0, 00:08:56.633 "data_size": 65536 00:08:56.633 } 00:08:56.633 ] 00:08:56.633 }' 00:08:56.633 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.633 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.200 [2024-11-26 13:21:45.609318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.200 "name": "Existed_Raid", 00:08:57.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.200 "strip_size_kb": 0, 00:08:57.200 "state": "configuring", 00:08:57.200 "raid_level": "raid1", 00:08:57.200 "superblock": false, 00:08:57.200 "num_base_bdevs": 3, 
00:08:57.200 "num_base_bdevs_discovered": 1, 00:08:57.200 "num_base_bdevs_operational": 3, 00:08:57.200 "base_bdevs_list": [ 00:08:57.200 { 00:08:57.200 "name": "BaseBdev1", 00:08:57.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.200 "is_configured": false, 00:08:57.200 "data_offset": 0, 00:08:57.200 "data_size": 0 00:08:57.200 }, 00:08:57.200 { 00:08:57.200 "name": null, 00:08:57.200 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:08:57.200 "is_configured": false, 00:08:57.200 "data_offset": 0, 00:08:57.200 "data_size": 65536 00:08:57.200 }, 00:08:57.200 { 00:08:57.200 "name": "BaseBdev3", 00:08:57.200 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:08:57.200 "is_configured": true, 00:08:57.200 "data_offset": 0, 00:08:57.200 "data_size": 65536 00:08:57.200 } 00:08:57.200 ] 00:08:57.200 }' 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.200 13:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.768 13:21:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.768 [2024-11-26 13:21:46.221634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.768 BaseBdev1 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.768 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.768 [ 00:08:57.768 { 00:08:57.768 "name": "BaseBdev1", 00:08:57.768 "aliases": [ 00:08:57.768 "758c2d79-6735-4217-9835-3e660179d5a6" 00:08:57.768 ], 00:08:57.768 "product_name": "Malloc disk", 
00:08:57.768 "block_size": 512, 00:08:57.768 "num_blocks": 65536, 00:08:57.768 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:08:57.768 "assigned_rate_limits": { 00:08:57.768 "rw_ios_per_sec": 0, 00:08:57.768 "rw_mbytes_per_sec": 0, 00:08:57.768 "r_mbytes_per_sec": 0, 00:08:57.769 "w_mbytes_per_sec": 0 00:08:57.769 }, 00:08:57.769 "claimed": true, 00:08:57.769 "claim_type": "exclusive_write", 00:08:57.769 "zoned": false, 00:08:57.769 "supported_io_types": { 00:08:57.769 "read": true, 00:08:57.769 "write": true, 00:08:57.769 "unmap": true, 00:08:57.769 "flush": true, 00:08:57.769 "reset": true, 00:08:57.769 "nvme_admin": false, 00:08:57.769 "nvme_io": false, 00:08:57.769 "nvme_io_md": false, 00:08:57.769 "write_zeroes": true, 00:08:57.769 "zcopy": true, 00:08:57.769 "get_zone_info": false, 00:08:57.769 "zone_management": false, 00:08:57.769 "zone_append": false, 00:08:57.769 "compare": false, 00:08:57.769 "compare_and_write": false, 00:08:57.769 "abort": true, 00:08:57.769 "seek_hole": false, 00:08:57.769 "seek_data": false, 00:08:57.769 "copy": true, 00:08:57.769 "nvme_iov_md": false 00:08:57.769 }, 00:08:57.769 "memory_domains": [ 00:08:57.769 { 00:08:57.769 "dma_device_id": "system", 00:08:57.769 "dma_device_type": 1 00:08:57.769 }, 00:08:57.769 { 00:08:57.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.769 "dma_device_type": 2 00:08:57.769 } 00:08:57.769 ], 00:08:57.769 "driver_specific": {} 00:08:57.769 } 00:08:57.769 ] 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.769 "name": "Existed_Raid", 00:08:57.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.769 "strip_size_kb": 0, 00:08:57.769 "state": "configuring", 00:08:57.769 "raid_level": "raid1", 00:08:57.769 "superblock": false, 00:08:57.769 "num_base_bdevs": 3, 00:08:57.769 "num_base_bdevs_discovered": 2, 00:08:57.769 "num_base_bdevs_operational": 3, 00:08:57.769 "base_bdevs_list": [ 00:08:57.769 { 00:08:57.769 "name": "BaseBdev1", 00:08:57.769 "uuid": 
"758c2d79-6735-4217-9835-3e660179d5a6", 00:08:57.769 "is_configured": true, 00:08:57.769 "data_offset": 0, 00:08:57.769 "data_size": 65536 00:08:57.769 }, 00:08:57.769 { 00:08:57.769 "name": null, 00:08:57.769 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:08:57.769 "is_configured": false, 00:08:57.769 "data_offset": 0, 00:08:57.769 "data_size": 65536 00:08:57.769 }, 00:08:57.769 { 00:08:57.769 "name": "BaseBdev3", 00:08:57.769 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:08:57.769 "is_configured": true, 00:08:57.769 "data_offset": 0, 00:08:57.769 "data_size": 65536 00:08:57.769 } 00:08:57.769 ] 00:08:57.769 }' 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.769 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.336 [2024-11-26 13:21:46.825815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.336 13:21:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.336 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.337 "name": "Existed_Raid", 00:08:58.337 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:58.337 "strip_size_kb": 0, 00:08:58.337 "state": "configuring", 00:08:58.337 "raid_level": "raid1", 00:08:58.337 "superblock": false, 00:08:58.337 "num_base_bdevs": 3, 00:08:58.337 "num_base_bdevs_discovered": 1, 00:08:58.337 "num_base_bdevs_operational": 3, 00:08:58.337 "base_bdevs_list": [ 00:08:58.337 { 00:08:58.337 "name": "BaseBdev1", 00:08:58.337 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:08:58.337 "is_configured": true, 00:08:58.337 "data_offset": 0, 00:08:58.337 "data_size": 65536 00:08:58.337 }, 00:08:58.337 { 00:08:58.337 "name": null, 00:08:58.337 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:08:58.337 "is_configured": false, 00:08:58.337 "data_offset": 0, 00:08:58.337 "data_size": 65536 00:08:58.337 }, 00:08:58.337 { 00:08:58.337 "name": null, 00:08:58.337 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:08:58.337 "is_configured": false, 00:08:58.337 "data_offset": 0, 00:08:58.337 "data_size": 65536 00:08:58.337 } 00:08:58.337 ] 00:08:58.337 }' 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.337 13:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.903 [2024-11-26 13:21:47.397967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.903 "name": "Existed_Raid", 00:08:58.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.903 "strip_size_kb": 0, 00:08:58.903 "state": "configuring", 00:08:58.903 "raid_level": "raid1", 00:08:58.903 "superblock": false, 00:08:58.903 "num_base_bdevs": 3, 00:08:58.903 "num_base_bdevs_discovered": 2, 00:08:58.903 "num_base_bdevs_operational": 3, 00:08:58.903 "base_bdevs_list": [ 00:08:58.903 { 00:08:58.903 "name": "BaseBdev1", 00:08:58.903 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:08:58.903 "is_configured": true, 00:08:58.903 "data_offset": 0, 00:08:58.903 "data_size": 65536 00:08:58.903 }, 00:08:58.903 { 00:08:58.903 "name": null, 00:08:58.903 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:08:58.903 "is_configured": false, 00:08:58.903 "data_offset": 0, 00:08:58.903 "data_size": 65536 00:08:58.903 }, 00:08:58.903 { 00:08:58.903 "name": "BaseBdev3", 00:08:58.903 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:08:58.903 "is_configured": true, 00:08:58.903 "data_offset": 0, 00:08:58.903 "data_size": 65536 00:08:58.903 } 00:08:58.903 ] 00:08:58.903 }' 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.903 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.470 13:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.470 [2024-11-26 13:21:47.986115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.728 13:21:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.728 "name": "Existed_Raid", 00:08:59.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.728 "strip_size_kb": 0, 00:08:59.728 "state": "configuring", 00:08:59.728 "raid_level": "raid1", 00:08:59.728 "superblock": false, 00:08:59.728 "num_base_bdevs": 3, 00:08:59.728 "num_base_bdevs_discovered": 1, 00:08:59.728 "num_base_bdevs_operational": 3, 00:08:59.728 "base_bdevs_list": [ 00:08:59.728 { 00:08:59.728 "name": null, 00:08:59.728 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:08:59.728 "is_configured": false, 00:08:59.728 "data_offset": 0, 00:08:59.728 "data_size": 65536 00:08:59.728 }, 00:08:59.728 { 00:08:59.728 "name": null, 00:08:59.728 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:08:59.728 "is_configured": false, 00:08:59.728 "data_offset": 0, 00:08:59.728 "data_size": 65536 00:08:59.728 }, 00:08:59.728 { 00:08:59.728 "name": "BaseBdev3", 00:08:59.728 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:08:59.728 "is_configured": true, 00:08:59.728 "data_offset": 0, 00:08:59.728 "data_size": 65536 00:08:59.728 } 00:08:59.728 ] 00:08:59.728 }' 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.728 13:21:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.295 [2024-11-26 13:21:48.622816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.295 "name": "Existed_Raid", 00:09:00.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.295 "strip_size_kb": 0, 00:09:00.295 "state": "configuring", 00:09:00.295 "raid_level": "raid1", 00:09:00.295 "superblock": false, 00:09:00.295 "num_base_bdevs": 3, 00:09:00.295 "num_base_bdevs_discovered": 2, 00:09:00.295 "num_base_bdevs_operational": 3, 00:09:00.295 "base_bdevs_list": [ 00:09:00.295 { 00:09:00.295 "name": null, 00:09:00.295 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:09:00.295 "is_configured": false, 00:09:00.295 "data_offset": 0, 00:09:00.295 "data_size": 65536 00:09:00.295 }, 00:09:00.295 { 00:09:00.295 "name": "BaseBdev2", 00:09:00.295 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:09:00.295 "is_configured": true, 00:09:00.295 "data_offset": 0, 00:09:00.295 "data_size": 65536 00:09:00.295 }, 00:09:00.295 { 
00:09:00.295 "name": "BaseBdev3", 00:09:00.295 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:09:00.295 "is_configured": true, 00:09:00.295 "data_offset": 0, 00:09:00.295 "data_size": 65536 00:09:00.295 } 00:09:00.295 ] 00:09:00.295 }' 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.295 13:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 758c2d79-6735-4217-9835-3e660179d5a6 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.862 13:21:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.862 [2024-11-26 13:21:49.277680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:00.862 [2024-11-26 13:21:49.277727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:00.862 [2024-11-26 13:21:49.277737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:00.862 [2024-11-26 13:21:49.277991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:00.862 [2024-11-26 13:21:49.278165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:00.862 [2024-11-26 13:21:49.278184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:00.862 [2024-11-26 13:21:49.278500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.862 NewBaseBdev 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.862 [ 00:09:00.862 { 00:09:00.862 "name": "NewBaseBdev", 00:09:00.862 "aliases": [ 00:09:00.862 "758c2d79-6735-4217-9835-3e660179d5a6" 00:09:00.862 ], 00:09:00.862 "product_name": "Malloc disk", 00:09:00.862 "block_size": 512, 00:09:00.862 "num_blocks": 65536, 00:09:00.862 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:09:00.862 "assigned_rate_limits": { 00:09:00.862 "rw_ios_per_sec": 0, 00:09:00.862 "rw_mbytes_per_sec": 0, 00:09:00.862 "r_mbytes_per_sec": 0, 00:09:00.862 "w_mbytes_per_sec": 0 00:09:00.862 }, 00:09:00.862 "claimed": true, 00:09:00.862 "claim_type": "exclusive_write", 00:09:00.862 "zoned": false, 00:09:00.862 "supported_io_types": { 00:09:00.862 "read": true, 00:09:00.862 "write": true, 00:09:00.862 "unmap": true, 00:09:00.862 "flush": true, 00:09:00.862 "reset": true, 00:09:00.862 "nvme_admin": false, 00:09:00.862 "nvme_io": false, 00:09:00.862 "nvme_io_md": false, 00:09:00.862 "write_zeroes": true, 00:09:00.862 "zcopy": true, 00:09:00.862 "get_zone_info": false, 00:09:00.862 "zone_management": false, 00:09:00.862 "zone_append": false, 00:09:00.862 "compare": false, 00:09:00.862 "compare_and_write": false, 00:09:00.862 "abort": true, 00:09:00.862 "seek_hole": false, 00:09:00.862 "seek_data": false, 00:09:00.862 "copy": true, 00:09:00.862 "nvme_iov_md": false 00:09:00.862 }, 00:09:00.862 "memory_domains": [ 00:09:00.862 { 00:09:00.862 
"dma_device_id": "system", 00:09:00.862 "dma_device_type": 1 00:09:00.862 }, 00:09:00.862 { 00:09:00.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.862 "dma_device_type": 2 00:09:00.862 } 00:09:00.862 ], 00:09:00.862 "driver_specific": {} 00:09:00.862 } 00:09:00.862 ] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.862 "name": "Existed_Raid", 00:09:00.862 "uuid": "adc40d3e-c243-4488-9d0d-acc85b68ae08", 00:09:00.862 "strip_size_kb": 0, 00:09:00.862 "state": "online", 00:09:00.862 "raid_level": "raid1", 00:09:00.862 "superblock": false, 00:09:00.862 "num_base_bdevs": 3, 00:09:00.862 "num_base_bdevs_discovered": 3, 00:09:00.862 "num_base_bdevs_operational": 3, 00:09:00.862 "base_bdevs_list": [ 00:09:00.862 { 00:09:00.862 "name": "NewBaseBdev", 00:09:00.862 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:09:00.862 "is_configured": true, 00:09:00.862 "data_offset": 0, 00:09:00.862 "data_size": 65536 00:09:00.862 }, 00:09:00.862 { 00:09:00.862 "name": "BaseBdev2", 00:09:00.862 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:09:00.862 "is_configured": true, 00:09:00.862 "data_offset": 0, 00:09:00.862 "data_size": 65536 00:09:00.862 }, 00:09:00.862 { 00:09:00.862 "name": "BaseBdev3", 00:09:00.862 "uuid": "37bbaa80-9393-43d5-b203-5efb6321be2a", 00:09:00.862 "is_configured": true, 00:09:00.862 "data_offset": 0, 00:09:00.862 "data_size": 65536 00:09:00.862 } 00:09:00.862 ] 00:09:00.862 }' 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.862 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.431 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.431 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.431 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.431 13:21:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.432 [2024-11-26 13:21:49.830129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.432 "name": "Existed_Raid", 00:09:01.432 "aliases": [ 00:09:01.432 "adc40d3e-c243-4488-9d0d-acc85b68ae08" 00:09:01.432 ], 00:09:01.432 "product_name": "Raid Volume", 00:09:01.432 "block_size": 512, 00:09:01.432 "num_blocks": 65536, 00:09:01.432 "uuid": "adc40d3e-c243-4488-9d0d-acc85b68ae08", 00:09:01.432 "assigned_rate_limits": { 00:09:01.432 "rw_ios_per_sec": 0, 00:09:01.432 "rw_mbytes_per_sec": 0, 00:09:01.432 "r_mbytes_per_sec": 0, 00:09:01.432 "w_mbytes_per_sec": 0 00:09:01.432 }, 00:09:01.432 "claimed": false, 00:09:01.432 "zoned": false, 00:09:01.432 "supported_io_types": { 00:09:01.432 "read": true, 00:09:01.432 "write": true, 00:09:01.432 "unmap": false, 00:09:01.432 "flush": false, 00:09:01.432 "reset": true, 00:09:01.432 "nvme_admin": false, 00:09:01.432 "nvme_io": false, 00:09:01.432 "nvme_io_md": false, 00:09:01.432 "write_zeroes": true, 00:09:01.432 "zcopy": false, 00:09:01.432 
"get_zone_info": false, 00:09:01.432 "zone_management": false, 00:09:01.432 "zone_append": false, 00:09:01.432 "compare": false, 00:09:01.432 "compare_and_write": false, 00:09:01.432 "abort": false, 00:09:01.432 "seek_hole": false, 00:09:01.432 "seek_data": false, 00:09:01.432 "copy": false, 00:09:01.432 "nvme_iov_md": false 00:09:01.432 }, 00:09:01.432 "memory_domains": [ 00:09:01.432 { 00:09:01.432 "dma_device_id": "system", 00:09:01.432 "dma_device_type": 1 00:09:01.432 }, 00:09:01.432 { 00:09:01.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.432 "dma_device_type": 2 00:09:01.432 }, 00:09:01.432 { 00:09:01.432 "dma_device_id": "system", 00:09:01.432 "dma_device_type": 1 00:09:01.432 }, 00:09:01.432 { 00:09:01.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.432 "dma_device_type": 2 00:09:01.432 }, 00:09:01.432 { 00:09:01.432 "dma_device_id": "system", 00:09:01.432 "dma_device_type": 1 00:09:01.432 }, 00:09:01.432 { 00:09:01.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.432 "dma_device_type": 2 00:09:01.432 } 00:09:01.432 ], 00:09:01.432 "driver_specific": { 00:09:01.432 "raid": { 00:09:01.432 "uuid": "adc40d3e-c243-4488-9d0d-acc85b68ae08", 00:09:01.432 "strip_size_kb": 0, 00:09:01.432 "state": "online", 00:09:01.432 "raid_level": "raid1", 00:09:01.432 "superblock": false, 00:09:01.432 "num_base_bdevs": 3, 00:09:01.432 "num_base_bdevs_discovered": 3, 00:09:01.432 "num_base_bdevs_operational": 3, 00:09:01.432 "base_bdevs_list": [ 00:09:01.432 { 00:09:01.432 "name": "NewBaseBdev", 00:09:01.432 "uuid": "758c2d79-6735-4217-9835-3e660179d5a6", 00:09:01.432 "is_configured": true, 00:09:01.432 "data_offset": 0, 00:09:01.432 "data_size": 65536 00:09:01.432 }, 00:09:01.432 { 00:09:01.432 "name": "BaseBdev2", 00:09:01.432 "uuid": "747256f5-0e81-404a-94ad-a71c11e7268f", 00:09:01.432 "is_configured": true, 00:09:01.432 "data_offset": 0, 00:09:01.432 "data_size": 65536 00:09:01.432 }, 00:09:01.432 { 00:09:01.432 "name": "BaseBdev3", 00:09:01.432 "uuid": 
"37bbaa80-9393-43d5-b203-5efb6321be2a", 00:09:01.432 "is_configured": true, 00:09:01.432 "data_offset": 0, 00:09:01.432 "data_size": 65536 00:09:01.432 } 00:09:01.432 ] 00:09:01.432 } 00:09:01.432 } 00:09:01.432 }' 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:01.432 BaseBdev2 00:09:01.432 BaseBdev3' 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.432 13:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.691 
[2024-11-26 13:21:50.149919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.691 [2024-11-26 13:21:50.149947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.691 [2024-11-26 13:21:50.150005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.691 [2024-11-26 13:21:50.150341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.691 [2024-11-26 13:21:50.150357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66918 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66918 ']' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66918 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66918 00:09:01.691 killing process with pid 66918 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66918' 00:09:01.691 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 66918 00:09:01.692 [2024-11-26 
13:21:50.189874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.692 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66918 00:09:01.950 [2024-11-26 13:21:50.391349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:02.887 00:09:02.887 real 0m11.371s 00:09:02.887 user 0m19.229s 00:09:02.887 sys 0m1.566s 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.887 ************************************ 00:09:02.887 END TEST raid_state_function_test 00:09:02.887 ************************************ 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.887 13:21:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:02.887 13:21:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.887 13:21:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.887 13:21:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.887 ************************************ 00:09:02.887 START TEST raid_state_function_test_sb 00:09:02.887 ************************************ 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:02.887 13:21:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:02.887 
13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67545 00:09:02.887 Process raid pid: 67545 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67545' 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67545 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67545 ']' 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.887 13:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.888 13:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.888 [2024-11-26 13:21:51.383179] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:09:02.888 [2024-11-26 13:21:51.383346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.147 [2024-11-26 13:21:51.543770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.147 [2024-11-26 13:21:51.653042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.405 [2024-11-26 13:21:51.822818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.405 [2024-11-26 13:21:51.823079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 [2024-11-26 13:21:52.387451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.973 [2024-11-26 13:21:52.387523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.973 [2024-11-26 13:21:52.387538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.973 [2024-11-26 13:21:52.387553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.973 [2024-11-26 13:21:52.387561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:03.973 [2024-11-26 13:21:52.387589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.973 "name": "Existed_Raid", 00:09:03.973 "uuid": "837c5ebe-689e-49ff-a061-d28fa20abdf0", 00:09:03.973 "strip_size_kb": 0, 00:09:03.973 "state": "configuring", 00:09:03.973 "raid_level": "raid1", 00:09:03.973 "superblock": true, 00:09:03.973 "num_base_bdevs": 3, 00:09:03.973 "num_base_bdevs_discovered": 0, 00:09:03.973 "num_base_bdevs_operational": 3, 00:09:03.973 "base_bdevs_list": [ 00:09:03.973 { 00:09:03.973 "name": "BaseBdev1", 00:09:03.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.973 "is_configured": false, 00:09:03.973 "data_offset": 0, 00:09:03.973 "data_size": 0 00:09:03.973 }, 00:09:03.973 { 00:09:03.973 "name": "BaseBdev2", 00:09:03.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.973 "is_configured": false, 00:09:03.973 "data_offset": 0, 00:09:03.973 "data_size": 0 00:09:03.973 }, 00:09:03.973 { 00:09:03.973 "name": "BaseBdev3", 00:09:03.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.973 "is_configured": false, 00:09:03.973 "data_offset": 0, 00:09:03.973 "data_size": 0 00:09:03.973 } 00:09:03.973 ] 00:09:03.973 }' 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.973 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 [2024-11-26 13:21:52.915479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.542 [2024-11-26 13:21:52.915512] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 [2024-11-26 13:21:52.923489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.542 [2024-11-26 13:21:52.923551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.542 [2024-11-26 13:21:52.923578] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.542 [2024-11-26 13:21:52.923592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.542 [2024-11-26 13:21:52.923614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.542 [2024-11-26 13:21:52.923641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 [2024-11-26 13:21:52.961563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.542 BaseBdev1 
00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 [ 00:09:04.542 { 00:09:04.542 "name": "BaseBdev1", 00:09:04.542 "aliases": [ 00:09:04.542 "c10faa62-5acf-447f-8096-13578a6402be" 00:09:04.542 ], 00:09:04.542 "product_name": "Malloc disk", 00:09:04.542 "block_size": 512, 00:09:04.542 "num_blocks": 65536, 00:09:04.542 "uuid": "c10faa62-5acf-447f-8096-13578a6402be", 00:09:04.542 "assigned_rate_limits": { 00:09:04.542 
"rw_ios_per_sec": 0, 00:09:04.542 "rw_mbytes_per_sec": 0, 00:09:04.542 "r_mbytes_per_sec": 0, 00:09:04.542 "w_mbytes_per_sec": 0 00:09:04.542 }, 00:09:04.542 "claimed": true, 00:09:04.542 "claim_type": "exclusive_write", 00:09:04.542 "zoned": false, 00:09:04.542 "supported_io_types": { 00:09:04.542 "read": true, 00:09:04.542 "write": true, 00:09:04.542 "unmap": true, 00:09:04.542 "flush": true, 00:09:04.542 "reset": true, 00:09:04.542 "nvme_admin": false, 00:09:04.542 "nvme_io": false, 00:09:04.542 "nvme_io_md": false, 00:09:04.542 "write_zeroes": true, 00:09:04.542 "zcopy": true, 00:09:04.542 "get_zone_info": false, 00:09:04.542 "zone_management": false, 00:09:04.542 "zone_append": false, 00:09:04.542 "compare": false, 00:09:04.542 "compare_and_write": false, 00:09:04.542 "abort": true, 00:09:04.542 "seek_hole": false, 00:09:04.542 "seek_data": false, 00:09:04.542 "copy": true, 00:09:04.542 "nvme_iov_md": false 00:09:04.542 }, 00:09:04.542 "memory_domains": [ 00:09:04.542 { 00:09:04.542 "dma_device_id": "system", 00:09:04.542 "dma_device_type": 1 00:09:04.542 }, 00:09:04.542 { 00:09:04.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.542 "dma_device_type": 2 00:09:04.542 } 00:09:04.542 ], 00:09:04.542 "driver_specific": {} 00:09:04.542 } 00:09:04.542 ] 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.542 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.542 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.542 "name": "Existed_Raid", 00:09:04.542 "uuid": "315d064e-fcbd-4543-a3d9-3d2ff7cfef15", 00:09:04.542 "strip_size_kb": 0, 00:09:04.542 "state": "configuring", 00:09:04.542 "raid_level": "raid1", 00:09:04.542 "superblock": true, 00:09:04.542 "num_base_bdevs": 3, 00:09:04.542 "num_base_bdevs_discovered": 1, 00:09:04.542 "num_base_bdevs_operational": 3, 00:09:04.542 "base_bdevs_list": [ 00:09:04.542 { 00:09:04.542 "name": "BaseBdev1", 00:09:04.542 "uuid": "c10faa62-5acf-447f-8096-13578a6402be", 00:09:04.542 "is_configured": true, 00:09:04.542 "data_offset": 2048, 00:09:04.542 "data_size": 63488 
00:09:04.542 }, 00:09:04.542 { 00:09:04.542 "name": "BaseBdev2", 00:09:04.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.542 "is_configured": false, 00:09:04.542 "data_offset": 0, 00:09:04.542 "data_size": 0 00:09:04.542 }, 00:09:04.542 { 00:09:04.542 "name": "BaseBdev3", 00:09:04.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.542 "is_configured": false, 00:09:04.542 "data_offset": 0, 00:09:04.542 "data_size": 0 00:09:04.542 } 00:09:04.542 ] 00:09:04.542 }' 00:09:04.542 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.542 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.110 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.110 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.110 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.110 [2024-11-26 13:21:53.517701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.110 [2024-11-26 13:21:53.517736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:05.110 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.110 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.110 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.110 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.110 [2024-11-26 13:21:53.525766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.111 [2024-11-26 13:21:53.527968] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.111 [2024-11-26 13:21:53.528013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.111 [2024-11-26 13:21:53.528027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.111 [2024-11-26 13:21:53.528039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.111 "name": "Existed_Raid", 00:09:05.111 "uuid": "5abe7a7e-fd84-4756-a765-cbdfd60ba638", 00:09:05.111 "strip_size_kb": 0, 00:09:05.111 "state": "configuring", 00:09:05.111 "raid_level": "raid1", 00:09:05.111 "superblock": true, 00:09:05.111 "num_base_bdevs": 3, 00:09:05.111 "num_base_bdevs_discovered": 1, 00:09:05.111 "num_base_bdevs_operational": 3, 00:09:05.111 "base_bdevs_list": [ 00:09:05.111 { 00:09:05.111 "name": "BaseBdev1", 00:09:05.111 "uuid": "c10faa62-5acf-447f-8096-13578a6402be", 00:09:05.111 "is_configured": true, 00:09:05.111 "data_offset": 2048, 00:09:05.111 "data_size": 63488 00:09:05.111 }, 00:09:05.111 { 00:09:05.111 "name": "BaseBdev2", 00:09:05.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.111 "is_configured": false, 00:09:05.111 "data_offset": 0, 00:09:05.111 "data_size": 0 00:09:05.111 }, 00:09:05.111 { 00:09:05.111 "name": "BaseBdev3", 00:09:05.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.111 "is_configured": false, 00:09:05.111 "data_offset": 0, 00:09:05.111 "data_size": 0 00:09:05.111 } 00:09:05.111 ] 00:09:05.111 }' 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.111 13:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 [2024-11-26 13:21:54.072898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.717 BaseBdev2 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 [ 00:09:05.717 { 00:09:05.717 "name": "BaseBdev2", 00:09:05.717 "aliases": [ 00:09:05.717 "5129019c-09a0-41de-9ddb-6648e64989df" 00:09:05.717 ], 00:09:05.717 "product_name": "Malloc disk", 00:09:05.717 "block_size": 512, 00:09:05.717 "num_blocks": 65536, 00:09:05.717 "uuid": "5129019c-09a0-41de-9ddb-6648e64989df", 00:09:05.717 "assigned_rate_limits": { 00:09:05.717 "rw_ios_per_sec": 0, 00:09:05.717 "rw_mbytes_per_sec": 0, 00:09:05.717 "r_mbytes_per_sec": 0, 00:09:05.717 "w_mbytes_per_sec": 0 00:09:05.717 }, 00:09:05.717 "claimed": true, 00:09:05.717 "claim_type": "exclusive_write", 00:09:05.717 "zoned": false, 00:09:05.717 "supported_io_types": { 00:09:05.717 "read": true, 00:09:05.717 "write": true, 00:09:05.717 "unmap": true, 00:09:05.717 "flush": true, 00:09:05.717 "reset": true, 00:09:05.717 "nvme_admin": false, 00:09:05.717 "nvme_io": false, 00:09:05.717 "nvme_io_md": false, 00:09:05.717 "write_zeroes": true, 00:09:05.717 "zcopy": true, 00:09:05.717 "get_zone_info": false, 00:09:05.717 "zone_management": false, 00:09:05.717 "zone_append": false, 00:09:05.717 "compare": false, 00:09:05.717 "compare_and_write": false, 00:09:05.717 "abort": true, 00:09:05.717 "seek_hole": false, 00:09:05.717 "seek_data": false, 00:09:05.717 "copy": true, 00:09:05.717 "nvme_iov_md": false 00:09:05.717 }, 00:09:05.717 "memory_domains": [ 00:09:05.717 { 00:09:05.717 "dma_device_id": "system", 00:09:05.717 "dma_device_type": 1 00:09:05.717 }, 00:09:05.717 { 00:09:05.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.717 "dma_device_type": 2 00:09:05.717 } 00:09:05.717 ], 00:09:05.717 "driver_specific": {} 00:09:05.717 } 00:09:05.717 ] 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.717 
13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.717 "name": "Existed_Raid", 00:09:05.717 "uuid": "5abe7a7e-fd84-4756-a765-cbdfd60ba638", 00:09:05.717 "strip_size_kb": 0, 00:09:05.717 "state": "configuring", 00:09:05.717 "raid_level": "raid1", 00:09:05.717 "superblock": true, 00:09:05.717 "num_base_bdevs": 3, 00:09:05.717 "num_base_bdevs_discovered": 2, 00:09:05.717 "num_base_bdevs_operational": 3, 00:09:05.717 "base_bdevs_list": [ 00:09:05.717 { 00:09:05.717 "name": "BaseBdev1", 00:09:05.717 "uuid": "c10faa62-5acf-447f-8096-13578a6402be", 00:09:05.717 "is_configured": true, 00:09:05.717 "data_offset": 2048, 00:09:05.717 "data_size": 63488 00:09:05.717 }, 00:09:05.717 { 00:09:05.717 "name": "BaseBdev2", 00:09:05.717 "uuid": "5129019c-09a0-41de-9ddb-6648e64989df", 00:09:05.717 "is_configured": true, 00:09:05.717 "data_offset": 2048, 00:09:05.717 "data_size": 63488 00:09:05.717 }, 00:09:05.717 { 00:09:05.717 "name": "BaseBdev3", 00:09:05.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.717 "is_configured": false, 00:09:05.717 "data_offset": 0, 00:09:05.717 "data_size": 0 00:09:05.718 } 00:09:05.718 ] 00:09:05.718 }' 00:09:05.718 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.718 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.315 [2024-11-26 13:21:54.654016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.315 [2024-11-26 13:21:54.654295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:06.315 [2024-11-26 13:21:54.654324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:06.315 BaseBdev3 00:09:06.315 [2024-11-26 13:21:54.654649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:06.315 [2024-11-26 13:21:54.654828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:06.315 [2024-11-26 13:21:54.654860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:06.315 [2024-11-26 13:21:54.655005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.315 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.316 13:21:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.316 [ 00:09:06.316 { 00:09:06.316 "name": "BaseBdev3", 00:09:06.316 "aliases": [ 00:09:06.316 "191c6b88-af19-4fc2-aa58-a9c19853fccf" 00:09:06.316 ], 00:09:06.316 "product_name": "Malloc disk", 00:09:06.316 "block_size": 512, 00:09:06.316 "num_blocks": 65536, 00:09:06.316 "uuid": "191c6b88-af19-4fc2-aa58-a9c19853fccf", 00:09:06.316 "assigned_rate_limits": { 00:09:06.316 "rw_ios_per_sec": 0, 00:09:06.316 "rw_mbytes_per_sec": 0, 00:09:06.316 "r_mbytes_per_sec": 0, 00:09:06.316 "w_mbytes_per_sec": 0 00:09:06.316 }, 00:09:06.316 "claimed": true, 00:09:06.316 "claim_type": "exclusive_write", 00:09:06.316 "zoned": false, 00:09:06.316 "supported_io_types": { 00:09:06.316 "read": true, 00:09:06.316 "write": true, 00:09:06.316 "unmap": true, 00:09:06.316 "flush": true, 00:09:06.316 "reset": true, 00:09:06.316 "nvme_admin": false, 00:09:06.316 "nvme_io": false, 00:09:06.316 "nvme_io_md": false, 00:09:06.316 "write_zeroes": true, 00:09:06.316 "zcopy": true, 00:09:06.316 "get_zone_info": false, 00:09:06.316 "zone_management": false, 00:09:06.316 "zone_append": false, 00:09:06.316 "compare": false, 00:09:06.316 "compare_and_write": false, 00:09:06.316 "abort": true, 00:09:06.316 "seek_hole": false, 00:09:06.316 "seek_data": false, 00:09:06.316 "copy": true, 00:09:06.316 "nvme_iov_md": false 00:09:06.316 }, 00:09:06.316 "memory_domains": [ 00:09:06.316 { 00:09:06.316 "dma_device_id": "system", 00:09:06.316 "dma_device_type": 1 00:09:06.316 }, 00:09:06.316 { 00:09:06.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.316 "dma_device_type": 2 00:09:06.316 } 00:09:06.316 ], 00:09:06.316 "driver_specific": {} 00:09:06.316 } 00:09:06.316 ] 
00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.316 
13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.316 "name": "Existed_Raid", 00:09:06.316 "uuid": "5abe7a7e-fd84-4756-a765-cbdfd60ba638", 00:09:06.316 "strip_size_kb": 0, 00:09:06.316 "state": "online", 00:09:06.316 "raid_level": "raid1", 00:09:06.316 "superblock": true, 00:09:06.316 "num_base_bdevs": 3, 00:09:06.316 "num_base_bdevs_discovered": 3, 00:09:06.316 "num_base_bdevs_operational": 3, 00:09:06.316 "base_bdevs_list": [ 00:09:06.316 { 00:09:06.316 "name": "BaseBdev1", 00:09:06.316 "uuid": "c10faa62-5acf-447f-8096-13578a6402be", 00:09:06.316 "is_configured": true, 00:09:06.316 "data_offset": 2048, 00:09:06.316 "data_size": 63488 00:09:06.316 }, 00:09:06.316 { 00:09:06.316 "name": "BaseBdev2", 00:09:06.316 "uuid": "5129019c-09a0-41de-9ddb-6648e64989df", 00:09:06.316 "is_configured": true, 00:09:06.316 "data_offset": 2048, 00:09:06.316 "data_size": 63488 00:09:06.316 }, 00:09:06.316 { 00:09:06.316 "name": "BaseBdev3", 00:09:06.316 "uuid": "191c6b88-af19-4fc2-aa58-a9c19853fccf", 00:09:06.316 "is_configured": true, 00:09:06.316 "data_offset": 2048, 00:09:06.316 "data_size": 63488 00:09:06.316 } 00:09:06.316 ] 00:09:06.316 }' 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.316 13:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.883 [2024-11-26 13:21:55.206514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.883 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.883 "name": "Existed_Raid", 00:09:06.883 "aliases": [ 00:09:06.883 "5abe7a7e-fd84-4756-a765-cbdfd60ba638" 00:09:06.883 ], 00:09:06.883 "product_name": "Raid Volume", 00:09:06.883 "block_size": 512, 00:09:06.883 "num_blocks": 63488, 00:09:06.883 "uuid": "5abe7a7e-fd84-4756-a765-cbdfd60ba638", 00:09:06.883 "assigned_rate_limits": { 00:09:06.883 "rw_ios_per_sec": 0, 00:09:06.883 "rw_mbytes_per_sec": 0, 00:09:06.883 "r_mbytes_per_sec": 0, 00:09:06.883 "w_mbytes_per_sec": 0 00:09:06.883 }, 00:09:06.883 "claimed": false, 00:09:06.883 "zoned": false, 00:09:06.883 "supported_io_types": { 00:09:06.883 "read": true, 00:09:06.883 "write": true, 00:09:06.883 "unmap": false, 00:09:06.883 "flush": false, 00:09:06.883 "reset": true, 00:09:06.883 "nvme_admin": false, 00:09:06.883 "nvme_io": false, 00:09:06.883 "nvme_io_md": false, 00:09:06.883 "write_zeroes": true, 
00:09:06.883 "zcopy": false, 00:09:06.883 "get_zone_info": false, 00:09:06.883 "zone_management": false, 00:09:06.883 "zone_append": false, 00:09:06.883 "compare": false, 00:09:06.883 "compare_and_write": false, 00:09:06.883 "abort": false, 00:09:06.883 "seek_hole": false, 00:09:06.883 "seek_data": false, 00:09:06.883 "copy": false, 00:09:06.883 "nvme_iov_md": false 00:09:06.883 }, 00:09:06.883 "memory_domains": [ 00:09:06.883 { 00:09:06.883 "dma_device_id": "system", 00:09:06.883 "dma_device_type": 1 00:09:06.883 }, 00:09:06.883 { 00:09:06.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.883 "dma_device_type": 2 00:09:06.883 }, 00:09:06.883 { 00:09:06.883 "dma_device_id": "system", 00:09:06.883 "dma_device_type": 1 00:09:06.883 }, 00:09:06.883 { 00:09:06.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.883 "dma_device_type": 2 00:09:06.883 }, 00:09:06.883 { 00:09:06.883 "dma_device_id": "system", 00:09:06.883 "dma_device_type": 1 00:09:06.883 }, 00:09:06.883 { 00:09:06.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.883 "dma_device_type": 2 00:09:06.883 } 00:09:06.883 ], 00:09:06.883 "driver_specific": { 00:09:06.883 "raid": { 00:09:06.883 "uuid": "5abe7a7e-fd84-4756-a765-cbdfd60ba638", 00:09:06.883 "strip_size_kb": 0, 00:09:06.884 "state": "online", 00:09:06.884 "raid_level": "raid1", 00:09:06.884 "superblock": true, 00:09:06.884 "num_base_bdevs": 3, 00:09:06.884 "num_base_bdevs_discovered": 3, 00:09:06.884 "num_base_bdevs_operational": 3, 00:09:06.884 "base_bdevs_list": [ 00:09:06.884 { 00:09:06.884 "name": "BaseBdev1", 00:09:06.884 "uuid": "c10faa62-5acf-447f-8096-13578a6402be", 00:09:06.884 "is_configured": true, 00:09:06.884 "data_offset": 2048, 00:09:06.884 "data_size": 63488 00:09:06.884 }, 00:09:06.884 { 00:09:06.884 "name": "BaseBdev2", 00:09:06.884 "uuid": "5129019c-09a0-41de-9ddb-6648e64989df", 00:09:06.884 "is_configured": true, 00:09:06.884 "data_offset": 2048, 00:09:06.884 "data_size": 63488 00:09:06.884 }, 00:09:06.884 { 
00:09:06.884 "name": "BaseBdev3", 00:09:06.884 "uuid": "191c6b88-af19-4fc2-aa58-a9c19853fccf", 00:09:06.884 "is_configured": true, 00:09:06.884 "data_offset": 2048, 00:09:06.884 "data_size": 63488 00:09:06.884 } 00:09:06.884 ] 00:09:06.884 } 00:09:06.884 } 00:09:06.884 }' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:06.884 BaseBdev2 00:09:06.884 BaseBdev3' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.884 13:21:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.884 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.143 [2024-11-26 13:21:55.522309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.143 
13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.143 "name": "Existed_Raid", 00:09:07.143 "uuid": "5abe7a7e-fd84-4756-a765-cbdfd60ba638", 00:09:07.143 "strip_size_kb": 0, 00:09:07.143 "state": "online", 00:09:07.143 "raid_level": "raid1", 00:09:07.143 "superblock": true, 00:09:07.143 "num_base_bdevs": 3, 00:09:07.143 "num_base_bdevs_discovered": 2, 00:09:07.143 "num_base_bdevs_operational": 2, 00:09:07.143 "base_bdevs_list": [ 00:09:07.143 { 00:09:07.143 "name": null, 00:09:07.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.143 "is_configured": false, 00:09:07.143 "data_offset": 0, 00:09:07.143 "data_size": 63488 00:09:07.143 }, 00:09:07.143 { 00:09:07.143 "name": "BaseBdev2", 00:09:07.143 "uuid": "5129019c-09a0-41de-9ddb-6648e64989df", 00:09:07.143 "is_configured": true, 00:09:07.143 "data_offset": 2048, 00:09:07.143 "data_size": 63488 00:09:07.143 }, 00:09:07.143 { 00:09:07.143 "name": "BaseBdev3", 00:09:07.143 "uuid": "191c6b88-af19-4fc2-aa58-a9c19853fccf", 00:09:07.143 "is_configured": true, 00:09:07.143 "data_offset": 2048, 00:09:07.143 "data_size": 63488 00:09:07.143 } 00:09:07.143 ] 00:09:07.143 }' 00:09:07.143 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.143 
13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.712 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:07.712 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.713 [2024-11-26 13:21:56.167331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.713 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.972 [2024-11-26 13:21:56.296905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:07.972 [2024-11-26 13:21:56.297153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.972 [2024-11-26 13:21:56.362793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.972 [2024-11-26 13:21:56.363026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.972 [2024-11-26 13:21:56.363197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.972 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.973 BaseBdev2 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.973 [ 00:09:07.973 { 00:09:07.973 "name": "BaseBdev2", 00:09:07.973 "aliases": [ 00:09:07.973 "274551aa-faec-40fb-a165-950b0481261c" 00:09:07.973 ], 00:09:07.973 "product_name": "Malloc disk", 00:09:07.973 "block_size": 512, 00:09:07.973 "num_blocks": 65536, 00:09:07.973 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:07.973 "assigned_rate_limits": { 00:09:07.973 "rw_ios_per_sec": 0, 00:09:07.973 "rw_mbytes_per_sec": 0, 00:09:07.973 "r_mbytes_per_sec": 0, 00:09:07.973 "w_mbytes_per_sec": 0 00:09:07.973 }, 00:09:07.973 "claimed": false, 00:09:07.973 "zoned": false, 00:09:07.973 "supported_io_types": { 00:09:07.973 "read": true, 00:09:07.973 "write": true, 00:09:07.973 "unmap": true, 00:09:07.973 "flush": true, 00:09:07.973 "reset": true, 00:09:07.973 "nvme_admin": false, 00:09:07.973 "nvme_io": false, 00:09:07.973 
"nvme_io_md": false, 00:09:07.973 "write_zeroes": true, 00:09:07.973 "zcopy": true, 00:09:07.973 "get_zone_info": false, 00:09:07.973 "zone_management": false, 00:09:07.973 "zone_append": false, 00:09:07.973 "compare": false, 00:09:07.973 "compare_and_write": false, 00:09:07.973 "abort": true, 00:09:07.973 "seek_hole": false, 00:09:07.973 "seek_data": false, 00:09:07.973 "copy": true, 00:09:07.973 "nvme_iov_md": false 00:09:07.973 }, 00:09:07.973 "memory_domains": [ 00:09:07.973 { 00:09:07.973 "dma_device_id": "system", 00:09:07.973 "dma_device_type": 1 00:09:07.973 }, 00:09:07.973 { 00:09:07.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.973 "dma_device_type": 2 00:09:07.973 } 00:09:07.973 ], 00:09:07.973 "driver_specific": {} 00:09:07.973 } 00:09:07.973 ] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.973 BaseBdev3 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.973 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.232 [ 00:09:08.232 { 00:09:08.232 "name": "BaseBdev3", 00:09:08.232 "aliases": [ 00:09:08.232 "76ee1f1e-875c-4d42-bff7-c474fda75529" 00:09:08.232 ], 00:09:08.232 "product_name": "Malloc disk", 00:09:08.232 "block_size": 512, 00:09:08.232 "num_blocks": 65536, 00:09:08.232 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:08.232 "assigned_rate_limits": { 00:09:08.232 "rw_ios_per_sec": 0, 00:09:08.232 "rw_mbytes_per_sec": 0, 00:09:08.232 "r_mbytes_per_sec": 0, 00:09:08.233 "w_mbytes_per_sec": 0 00:09:08.233 }, 00:09:08.233 "claimed": false, 00:09:08.233 "zoned": false, 00:09:08.233 "supported_io_types": { 00:09:08.233 "read": true, 00:09:08.233 "write": true, 00:09:08.233 "unmap": true, 00:09:08.233 "flush": true, 00:09:08.233 "reset": true, 00:09:08.233 "nvme_admin": false, 
00:09:08.233 "nvme_io": false, 00:09:08.233 "nvme_io_md": false, 00:09:08.233 "write_zeroes": true, 00:09:08.233 "zcopy": true, 00:09:08.233 "get_zone_info": false, 00:09:08.233 "zone_management": false, 00:09:08.233 "zone_append": false, 00:09:08.233 "compare": false, 00:09:08.233 "compare_and_write": false, 00:09:08.233 "abort": true, 00:09:08.233 "seek_hole": false, 00:09:08.233 "seek_data": false, 00:09:08.233 "copy": true, 00:09:08.233 "nvme_iov_md": false 00:09:08.233 }, 00:09:08.233 "memory_domains": [ 00:09:08.233 { 00:09:08.233 "dma_device_id": "system", 00:09:08.233 "dma_device_type": 1 00:09:08.233 }, 00:09:08.233 { 00:09:08.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.233 "dma_device_type": 2 00:09:08.233 } 00:09:08.233 ], 00:09:08.233 "driver_specific": {} 00:09:08.233 } 00:09:08.233 ] 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 [2024-11-26 13:21:56.559026] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.233 [2024-11-26 13:21:56.559080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.233 [2024-11-26 13:21:56.559103] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.233 [2024-11-26 13:21:56.561286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 
13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.233 "name": "Existed_Raid", 00:09:08.233 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:08.233 "strip_size_kb": 0, 00:09:08.233 "state": "configuring", 00:09:08.233 "raid_level": "raid1", 00:09:08.233 "superblock": true, 00:09:08.233 "num_base_bdevs": 3, 00:09:08.233 "num_base_bdevs_discovered": 2, 00:09:08.233 "num_base_bdevs_operational": 3, 00:09:08.233 "base_bdevs_list": [ 00:09:08.233 { 00:09:08.233 "name": "BaseBdev1", 00:09:08.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.233 "is_configured": false, 00:09:08.233 "data_offset": 0, 00:09:08.233 "data_size": 0 00:09:08.233 }, 00:09:08.233 { 00:09:08.233 "name": "BaseBdev2", 00:09:08.233 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:08.233 "is_configured": true, 00:09:08.233 "data_offset": 2048, 00:09:08.233 "data_size": 63488 00:09:08.233 }, 00:09:08.233 { 00:09:08.233 "name": "BaseBdev3", 00:09:08.233 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:08.233 "is_configured": true, 00:09:08.233 "data_offset": 2048, 00:09:08.233 "data_size": 63488 00:09:08.233 } 00:09:08.233 ] 00:09:08.233 }' 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.233 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.801 [2024-11-26 13:21:57.083111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.801 13:21:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.801 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.801 "name": 
"Existed_Raid", 00:09:08.801 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:08.801 "strip_size_kb": 0, 00:09:08.801 "state": "configuring", 00:09:08.801 "raid_level": "raid1", 00:09:08.801 "superblock": true, 00:09:08.801 "num_base_bdevs": 3, 00:09:08.801 "num_base_bdevs_discovered": 1, 00:09:08.801 "num_base_bdevs_operational": 3, 00:09:08.801 "base_bdevs_list": [ 00:09:08.801 { 00:09:08.801 "name": "BaseBdev1", 00:09:08.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.802 "is_configured": false, 00:09:08.802 "data_offset": 0, 00:09:08.802 "data_size": 0 00:09:08.802 }, 00:09:08.802 { 00:09:08.802 "name": null, 00:09:08.802 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:08.802 "is_configured": false, 00:09:08.802 "data_offset": 0, 00:09:08.802 "data_size": 63488 00:09:08.802 }, 00:09:08.802 { 00:09:08.802 "name": "BaseBdev3", 00:09:08.802 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:08.802 "is_configured": true, 00:09:08.802 "data_offset": 2048, 00:09:08.802 "data_size": 63488 00:09:08.802 } 00:09:08.802 ] 00:09:08.802 }' 00:09:08.802 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.802 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.060 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.060 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.060 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.060 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.060 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.320 
13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 [2024-11-26 13:21:57.687268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.320 BaseBdev1 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 [ 00:09:09.320 { 00:09:09.320 "name": "BaseBdev1", 00:09:09.320 "aliases": [ 00:09:09.320 "59cc6204-4111-4aa9-a73c-0353eca425af" 00:09:09.320 ], 00:09:09.320 "product_name": "Malloc disk", 00:09:09.320 "block_size": 512, 00:09:09.320 "num_blocks": 65536, 00:09:09.320 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:09.320 "assigned_rate_limits": { 00:09:09.320 "rw_ios_per_sec": 0, 00:09:09.320 "rw_mbytes_per_sec": 0, 00:09:09.320 "r_mbytes_per_sec": 0, 00:09:09.320 "w_mbytes_per_sec": 0 00:09:09.320 }, 00:09:09.320 "claimed": true, 00:09:09.320 "claim_type": "exclusive_write", 00:09:09.320 "zoned": false, 00:09:09.320 "supported_io_types": { 00:09:09.320 "read": true, 00:09:09.320 "write": true, 00:09:09.320 "unmap": true, 00:09:09.320 "flush": true, 00:09:09.320 "reset": true, 00:09:09.320 "nvme_admin": false, 00:09:09.320 "nvme_io": false, 00:09:09.320 "nvme_io_md": false, 00:09:09.320 "write_zeroes": true, 00:09:09.320 "zcopy": true, 00:09:09.320 "get_zone_info": false, 00:09:09.320 "zone_management": false, 00:09:09.320 "zone_append": false, 00:09:09.320 "compare": false, 00:09:09.320 "compare_and_write": false, 00:09:09.320 "abort": true, 00:09:09.320 "seek_hole": false, 00:09:09.320 "seek_data": false, 00:09:09.320 "copy": true, 00:09:09.320 "nvme_iov_md": false 00:09:09.320 }, 00:09:09.320 "memory_domains": [ 00:09:09.320 { 00:09:09.320 "dma_device_id": "system", 00:09:09.320 "dma_device_type": 1 00:09:09.320 }, 00:09:09.320 { 00:09:09.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.320 "dma_device_type": 2 00:09:09.320 } 00:09:09.320 ], 00:09:09.320 "driver_specific": {} 00:09:09.320 } 00:09:09.320 ] 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.320 
13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.320 "name": "Existed_Raid", 00:09:09.320 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:09.320 "strip_size_kb": 0, 
00:09:09.320 "state": "configuring", 00:09:09.320 "raid_level": "raid1", 00:09:09.320 "superblock": true, 00:09:09.320 "num_base_bdevs": 3, 00:09:09.320 "num_base_bdevs_discovered": 2, 00:09:09.320 "num_base_bdevs_operational": 3, 00:09:09.320 "base_bdevs_list": [ 00:09:09.320 { 00:09:09.320 "name": "BaseBdev1", 00:09:09.320 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:09.320 "is_configured": true, 00:09:09.320 "data_offset": 2048, 00:09:09.320 "data_size": 63488 00:09:09.320 }, 00:09:09.320 { 00:09:09.320 "name": null, 00:09:09.320 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:09.320 "is_configured": false, 00:09:09.320 "data_offset": 0, 00:09:09.320 "data_size": 63488 00:09:09.320 }, 00:09:09.320 { 00:09:09.320 "name": "BaseBdev3", 00:09:09.320 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:09.320 "is_configured": true, 00:09:09.320 "data_offset": 2048, 00:09:09.320 "data_size": 63488 00:09:09.320 } 00:09:09.320 ] 00:09:09.320 }' 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.320 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.889 [2024-11-26 13:21:58.291458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.889 "name": "Existed_Raid", 00:09:09.889 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:09.889 "strip_size_kb": 0, 00:09:09.889 "state": "configuring", 00:09:09.889 "raid_level": "raid1", 00:09:09.889 "superblock": true, 00:09:09.889 "num_base_bdevs": 3, 00:09:09.889 "num_base_bdevs_discovered": 1, 00:09:09.889 "num_base_bdevs_operational": 3, 00:09:09.889 "base_bdevs_list": [ 00:09:09.889 { 00:09:09.889 "name": "BaseBdev1", 00:09:09.889 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:09.889 "is_configured": true, 00:09:09.889 "data_offset": 2048, 00:09:09.889 "data_size": 63488 00:09:09.889 }, 00:09:09.889 { 00:09:09.889 "name": null, 00:09:09.889 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:09.889 "is_configured": false, 00:09:09.889 "data_offset": 0, 00:09:09.889 "data_size": 63488 00:09:09.889 }, 00:09:09.889 { 00:09:09.889 "name": null, 00:09:09.889 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:09.889 "is_configured": false, 00:09:09.889 "data_offset": 0, 00:09:09.889 "data_size": 63488 00:09:09.889 } 00:09:09.889 ] 00:09:09.889 }' 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.889 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.458 13:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.458 [2024-11-26 13:21:58.871624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.458 "name": "Existed_Raid", 00:09:10.458 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:10.458 "strip_size_kb": 0, 00:09:10.458 "state": "configuring", 00:09:10.458 "raid_level": "raid1", 00:09:10.458 "superblock": true, 00:09:10.458 "num_base_bdevs": 3, 00:09:10.458 "num_base_bdevs_discovered": 2, 00:09:10.458 "num_base_bdevs_operational": 3, 00:09:10.458 "base_bdevs_list": [ 00:09:10.458 { 00:09:10.458 "name": "BaseBdev1", 00:09:10.458 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:10.458 "is_configured": true, 00:09:10.458 "data_offset": 2048, 00:09:10.458 "data_size": 63488 00:09:10.458 }, 00:09:10.458 { 00:09:10.458 "name": null, 00:09:10.458 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:10.458 "is_configured": false, 00:09:10.458 "data_offset": 0, 00:09:10.458 "data_size": 63488 00:09:10.458 }, 00:09:10.458 { 00:09:10.458 "name": "BaseBdev3", 00:09:10.458 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:10.458 "is_configured": true, 00:09:10.458 "data_offset": 2048, 00:09:10.458 "data_size": 63488 00:09:10.458 } 00:09:10.458 ] 00:09:10.458 }' 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.458 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.026 [2024-11-26 13:21:59.447806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.026 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.027 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.027 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.027 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.027 "name": "Existed_Raid", 00:09:11.027 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:11.027 "strip_size_kb": 0, 00:09:11.027 "state": "configuring", 00:09:11.027 "raid_level": "raid1", 00:09:11.027 "superblock": true, 00:09:11.027 "num_base_bdevs": 3, 00:09:11.027 "num_base_bdevs_discovered": 1, 00:09:11.027 "num_base_bdevs_operational": 3, 00:09:11.027 "base_bdevs_list": [ 00:09:11.027 { 00:09:11.027 "name": null, 00:09:11.027 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:11.027 "is_configured": false, 00:09:11.027 "data_offset": 0, 00:09:11.027 "data_size": 63488 00:09:11.027 }, 00:09:11.027 { 00:09:11.027 "name": null, 00:09:11.027 "uuid": 
"274551aa-faec-40fb-a165-950b0481261c", 00:09:11.027 "is_configured": false, 00:09:11.027 "data_offset": 0, 00:09:11.027 "data_size": 63488 00:09:11.027 }, 00:09:11.027 { 00:09:11.027 "name": "BaseBdev3", 00:09:11.027 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:11.027 "is_configured": true, 00:09:11.027 "data_offset": 2048, 00:09:11.027 "data_size": 63488 00:09:11.027 } 00:09:11.027 ] 00:09:11.027 }' 00:09:11.027 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.027 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.595 [2024-11-26 13:22:00.092930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.595 "name": "Existed_Raid", 00:09:11.595 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:11.595 "strip_size_kb": 0, 00:09:11.595 "state": "configuring", 00:09:11.595 
"raid_level": "raid1", 00:09:11.595 "superblock": true, 00:09:11.595 "num_base_bdevs": 3, 00:09:11.595 "num_base_bdevs_discovered": 2, 00:09:11.595 "num_base_bdevs_operational": 3, 00:09:11.595 "base_bdevs_list": [ 00:09:11.595 { 00:09:11.595 "name": null, 00:09:11.595 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:11.595 "is_configured": false, 00:09:11.595 "data_offset": 0, 00:09:11.595 "data_size": 63488 00:09:11.595 }, 00:09:11.595 { 00:09:11.595 "name": "BaseBdev2", 00:09:11.595 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:11.595 "is_configured": true, 00:09:11.595 "data_offset": 2048, 00:09:11.595 "data_size": 63488 00:09:11.595 }, 00:09:11.595 { 00:09:11.595 "name": "BaseBdev3", 00:09:11.595 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:11.595 "is_configured": true, 00:09:11.595 "data_offset": 2048, 00:09:11.595 "data_size": 63488 00:09:11.595 } 00:09:11.595 ] 00:09:11.595 }' 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.595 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.162 13:22:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.162 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.420 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59cc6204-4111-4aa9-a73c-0353eca425af 00:09:12.420 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.420 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.420 [2024-11-26 13:22:00.759450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.420 [2024-11-26 13:22:00.759662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.420 [2024-11-26 13:22:00.759677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.420 [2024-11-26 13:22:00.759938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.420 NewBaseBdev 00:09:12.420 [2024-11-26 13:22:00.760103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.420 [2024-11-26 13:22:00.760122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:12.420 [2024-11-26 13:22:00.760308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.420 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.420 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.420 
13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:12.420 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.420 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.421 [ 00:09:12.421 { 00:09:12.421 "name": "NewBaseBdev", 00:09:12.421 "aliases": [ 00:09:12.421 "59cc6204-4111-4aa9-a73c-0353eca425af" 00:09:12.421 ], 00:09:12.421 "product_name": "Malloc disk", 00:09:12.421 "block_size": 512, 00:09:12.421 "num_blocks": 65536, 00:09:12.421 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:12.421 "assigned_rate_limits": { 00:09:12.421 "rw_ios_per_sec": 0, 00:09:12.421 "rw_mbytes_per_sec": 0, 00:09:12.421 "r_mbytes_per_sec": 0, 00:09:12.421 "w_mbytes_per_sec": 0 00:09:12.421 }, 00:09:12.421 "claimed": true, 00:09:12.421 "claim_type": "exclusive_write", 00:09:12.421 
"zoned": false, 00:09:12.421 "supported_io_types": { 00:09:12.421 "read": true, 00:09:12.421 "write": true, 00:09:12.421 "unmap": true, 00:09:12.421 "flush": true, 00:09:12.421 "reset": true, 00:09:12.421 "nvme_admin": false, 00:09:12.421 "nvme_io": false, 00:09:12.421 "nvme_io_md": false, 00:09:12.421 "write_zeroes": true, 00:09:12.421 "zcopy": true, 00:09:12.421 "get_zone_info": false, 00:09:12.421 "zone_management": false, 00:09:12.421 "zone_append": false, 00:09:12.421 "compare": false, 00:09:12.421 "compare_and_write": false, 00:09:12.421 "abort": true, 00:09:12.421 "seek_hole": false, 00:09:12.421 "seek_data": false, 00:09:12.421 "copy": true, 00:09:12.421 "nvme_iov_md": false 00:09:12.421 }, 00:09:12.421 "memory_domains": [ 00:09:12.421 { 00:09:12.421 "dma_device_id": "system", 00:09:12.421 "dma_device_type": 1 00:09:12.421 }, 00:09:12.421 { 00:09:12.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.421 "dma_device_type": 2 00:09:12.421 } 00:09:12.421 ], 00:09:12.421 "driver_specific": {} 00:09:12.421 } 00:09:12.421 ] 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.421 "name": "Existed_Raid", 00:09:12.421 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:12.421 "strip_size_kb": 0, 00:09:12.421 "state": "online", 00:09:12.421 "raid_level": "raid1", 00:09:12.421 "superblock": true, 00:09:12.421 "num_base_bdevs": 3, 00:09:12.421 "num_base_bdevs_discovered": 3, 00:09:12.421 "num_base_bdevs_operational": 3, 00:09:12.421 "base_bdevs_list": [ 00:09:12.421 { 00:09:12.421 "name": "NewBaseBdev", 00:09:12.421 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:12.421 "is_configured": true, 00:09:12.421 "data_offset": 2048, 00:09:12.421 "data_size": 63488 00:09:12.421 }, 00:09:12.421 { 00:09:12.421 "name": "BaseBdev2", 00:09:12.421 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:12.421 "is_configured": true, 00:09:12.421 "data_offset": 2048, 00:09:12.421 "data_size": 63488 00:09:12.421 }, 00:09:12.421 
{ 00:09:12.421 "name": "BaseBdev3", 00:09:12.421 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:12.421 "is_configured": true, 00:09:12.421 "data_offset": 2048, 00:09:12.421 "data_size": 63488 00:09:12.421 } 00:09:12.421 ] 00:09:12.421 }' 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.421 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.987 [2024-11-26 13:22:01.323943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.987 "name": "Existed_Raid", 00:09:12.987 
"aliases": [ 00:09:12.987 "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6" 00:09:12.987 ], 00:09:12.987 "product_name": "Raid Volume", 00:09:12.987 "block_size": 512, 00:09:12.987 "num_blocks": 63488, 00:09:12.987 "uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:12.987 "assigned_rate_limits": { 00:09:12.987 "rw_ios_per_sec": 0, 00:09:12.987 "rw_mbytes_per_sec": 0, 00:09:12.987 "r_mbytes_per_sec": 0, 00:09:12.987 "w_mbytes_per_sec": 0 00:09:12.987 }, 00:09:12.987 "claimed": false, 00:09:12.987 "zoned": false, 00:09:12.987 "supported_io_types": { 00:09:12.987 "read": true, 00:09:12.987 "write": true, 00:09:12.987 "unmap": false, 00:09:12.987 "flush": false, 00:09:12.987 "reset": true, 00:09:12.987 "nvme_admin": false, 00:09:12.987 "nvme_io": false, 00:09:12.987 "nvme_io_md": false, 00:09:12.987 "write_zeroes": true, 00:09:12.987 "zcopy": false, 00:09:12.987 "get_zone_info": false, 00:09:12.987 "zone_management": false, 00:09:12.987 "zone_append": false, 00:09:12.987 "compare": false, 00:09:12.987 "compare_and_write": false, 00:09:12.987 "abort": false, 00:09:12.987 "seek_hole": false, 00:09:12.987 "seek_data": false, 00:09:12.987 "copy": false, 00:09:12.987 "nvme_iov_md": false 00:09:12.987 }, 00:09:12.987 "memory_domains": [ 00:09:12.987 { 00:09:12.987 "dma_device_id": "system", 00:09:12.987 "dma_device_type": 1 00:09:12.987 }, 00:09:12.987 { 00:09:12.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.987 "dma_device_type": 2 00:09:12.987 }, 00:09:12.987 { 00:09:12.987 "dma_device_id": "system", 00:09:12.987 "dma_device_type": 1 00:09:12.987 }, 00:09:12.987 { 00:09:12.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.987 "dma_device_type": 2 00:09:12.987 }, 00:09:12.987 { 00:09:12.987 "dma_device_id": "system", 00:09:12.987 "dma_device_type": 1 00:09:12.987 }, 00:09:12.987 { 00:09:12.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.987 "dma_device_type": 2 00:09:12.987 } 00:09:12.987 ], 00:09:12.987 "driver_specific": { 00:09:12.987 "raid": { 00:09:12.987 
"uuid": "0f97bd73-4821-4b7e-acc8-3cbe167eb0d6", 00:09:12.987 "strip_size_kb": 0, 00:09:12.987 "state": "online", 00:09:12.987 "raid_level": "raid1", 00:09:12.987 "superblock": true, 00:09:12.987 "num_base_bdevs": 3, 00:09:12.987 "num_base_bdevs_discovered": 3, 00:09:12.987 "num_base_bdevs_operational": 3, 00:09:12.987 "base_bdevs_list": [ 00:09:12.987 { 00:09:12.987 "name": "NewBaseBdev", 00:09:12.987 "uuid": "59cc6204-4111-4aa9-a73c-0353eca425af", 00:09:12.987 "is_configured": true, 00:09:12.987 "data_offset": 2048, 00:09:12.987 "data_size": 63488 00:09:12.987 }, 00:09:12.987 { 00:09:12.987 "name": "BaseBdev2", 00:09:12.987 "uuid": "274551aa-faec-40fb-a165-950b0481261c", 00:09:12.987 "is_configured": true, 00:09:12.987 "data_offset": 2048, 00:09:12.987 "data_size": 63488 00:09:12.987 }, 00:09:12.987 { 00:09:12.987 "name": "BaseBdev3", 00:09:12.987 "uuid": "76ee1f1e-875c-4d42-bff7-c474fda75529", 00:09:12.987 "is_configured": true, 00:09:12.987 "data_offset": 2048, 00:09:12.987 "data_size": 63488 00:09:12.987 } 00:09:12.987 ] 00:09:12.987 } 00:09:12.987 } 00:09:12.987 }' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.987 BaseBdev2 00:09:12.987 BaseBdev3' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.987 13:22:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.987 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.246 [2024-11-26 13:22:01.643723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.246 [2024-11-26 13:22:01.643848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.246 [2024-11-26 13:22:01.643918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.246 [2024-11-26 13:22:01.644207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.246 [2024-11-26 13:22:01.644221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67545 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67545 ']' 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67545 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67545 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67545' 00:09:13.246 killing process with pid 67545 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67545 00:09:13.246 [2024-11-26 13:22:01.685132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.246 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67545 00:09:13.505 [2024-11-26 13:22:01.885449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.443 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:14.443 00:09:14.443 real 0m11.429s 00:09:14.443 user 0m19.336s 00:09:14.443 sys 0m1.570s 00:09:14.443 ************************************ 00:09:14.443 END TEST raid_state_function_test_sb 00:09:14.443 ************************************ 00:09:14.443 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.443 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.443 13:22:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:14.443 13:22:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:14.443 13:22:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.443 13:22:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.443 ************************************ 00:09:14.443 START TEST raid_superblock_test 00:09:14.443 ************************************ 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68177 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68177 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:14.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68177 ']' 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.443 13:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.444 13:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.444 13:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.444 [2024-11-26 13:22:02.891778] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:09:14.444 [2024-11-26 13:22:02.891963] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68177 ] 00:09:14.702 [2024-11-26 13:22:03.082186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.702 [2024-11-26 13:22:03.218919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.961 [2024-11-26 13:22:03.389944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.961 [2024-11-26 13:22:03.390007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:15.530 
13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.530 malloc1 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.530 [2024-11-26 13:22:03.866273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:15.530 [2024-11-26 13:22:03.866347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.530 [2024-11-26 13:22:03.866378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:15.530 [2024-11-26 13:22:03.866392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.530 [2024-11-26 13:22:03.868674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.530 [2024-11-26 13:22:03.868955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:15.530 pt1 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.530 malloc2 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.530 [2024-11-26 13:22:03.911932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:15.530 [2024-11-26 13:22:03.911987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.530 [2024-11-26 13:22:03.912014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:15.530 [2024-11-26 13:22:03.912027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.530 [2024-11-26 13:22:03.914310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.530 [2024-11-26 13:22:03.914492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:15.530 
pt2 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.530 malloc3 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.530 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.531 [2024-11-26 13:22:03.965904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:15.531 [2024-11-26 13:22:03.965957] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.531 [2024-11-26 13:22:03.965984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:15.531 [2024-11-26 13:22:03.965998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.531 [2024-11-26 13:22:03.968264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.531 [2024-11-26 13:22:03.968303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:15.531 pt3 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.531 [2024-11-26 13:22:03.977965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:15.531 [2024-11-26 13:22:03.980189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:15.531 [2024-11-26 13:22:03.980292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:15.531 [2024-11-26 13:22:03.980483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:15.531 [2024-11-26 13:22:03.980508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:15.531 [2024-11-26 13:22:03.980760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:15.531 
[2024-11-26 13:22:03.980948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:15.531 [2024-11-26 13:22:03.980965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:15.531 [2024-11-26 13:22:03.981117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.531 13:22:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.531 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.531 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.531 "name": "raid_bdev1", 00:09:15.531 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:15.531 "strip_size_kb": 0, 00:09:15.531 "state": "online", 00:09:15.531 "raid_level": "raid1", 00:09:15.531 "superblock": true, 00:09:15.531 "num_base_bdevs": 3, 00:09:15.531 "num_base_bdevs_discovered": 3, 00:09:15.531 "num_base_bdevs_operational": 3, 00:09:15.531 "base_bdevs_list": [ 00:09:15.531 { 00:09:15.531 "name": "pt1", 00:09:15.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.531 "is_configured": true, 00:09:15.531 "data_offset": 2048, 00:09:15.531 "data_size": 63488 00:09:15.531 }, 00:09:15.531 { 00:09:15.531 "name": "pt2", 00:09:15.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.531 "is_configured": true, 00:09:15.531 "data_offset": 2048, 00:09:15.531 "data_size": 63488 00:09:15.531 }, 00:09:15.531 { 00:09:15.531 "name": "pt3", 00:09:15.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.531 "is_configured": true, 00:09:15.531 "data_offset": 2048, 00:09:15.531 "data_size": 63488 00:09:15.531 } 00:09:15.531 ] 00:09:15.531 }' 00:09:15.531 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.531 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.100 13:22:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.100 [2024-11-26 13:22:04.506366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.100 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.100 "name": "raid_bdev1", 00:09:16.100 "aliases": [ 00:09:16.100 "79047c2b-840c-44f8-b2b9-e6ed620df7c1" 00:09:16.100 ], 00:09:16.100 "product_name": "Raid Volume", 00:09:16.100 "block_size": 512, 00:09:16.100 "num_blocks": 63488, 00:09:16.100 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:16.100 "assigned_rate_limits": { 00:09:16.100 "rw_ios_per_sec": 0, 00:09:16.100 "rw_mbytes_per_sec": 0, 00:09:16.100 "r_mbytes_per_sec": 0, 00:09:16.100 "w_mbytes_per_sec": 0 00:09:16.100 }, 00:09:16.100 "claimed": false, 00:09:16.100 "zoned": false, 00:09:16.100 "supported_io_types": { 00:09:16.100 "read": true, 00:09:16.100 "write": true, 00:09:16.100 "unmap": false, 00:09:16.100 "flush": false, 00:09:16.100 "reset": true, 00:09:16.100 "nvme_admin": false, 00:09:16.100 "nvme_io": false, 00:09:16.100 "nvme_io_md": false, 00:09:16.100 "write_zeroes": true, 00:09:16.100 "zcopy": false, 00:09:16.100 "get_zone_info": false, 00:09:16.100 "zone_management": false, 00:09:16.100 "zone_append": false, 00:09:16.100 "compare": false, 00:09:16.100 
"compare_and_write": false, 00:09:16.100 "abort": false, 00:09:16.100 "seek_hole": false, 00:09:16.100 "seek_data": false, 00:09:16.100 "copy": false, 00:09:16.100 "nvme_iov_md": false 00:09:16.100 }, 00:09:16.100 "memory_domains": [ 00:09:16.100 { 00:09:16.100 "dma_device_id": "system", 00:09:16.100 "dma_device_type": 1 00:09:16.100 }, 00:09:16.100 { 00:09:16.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.100 "dma_device_type": 2 00:09:16.100 }, 00:09:16.100 { 00:09:16.100 "dma_device_id": "system", 00:09:16.100 "dma_device_type": 1 00:09:16.100 }, 00:09:16.100 { 00:09:16.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.100 "dma_device_type": 2 00:09:16.100 }, 00:09:16.100 { 00:09:16.100 "dma_device_id": "system", 00:09:16.100 "dma_device_type": 1 00:09:16.100 }, 00:09:16.100 { 00:09:16.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.100 "dma_device_type": 2 00:09:16.100 } 00:09:16.100 ], 00:09:16.100 "driver_specific": { 00:09:16.100 "raid": { 00:09:16.100 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:16.100 "strip_size_kb": 0, 00:09:16.100 "state": "online", 00:09:16.100 "raid_level": "raid1", 00:09:16.100 "superblock": true, 00:09:16.100 "num_base_bdevs": 3, 00:09:16.100 "num_base_bdevs_discovered": 3, 00:09:16.100 "num_base_bdevs_operational": 3, 00:09:16.100 "base_bdevs_list": [ 00:09:16.100 { 00:09:16.100 "name": "pt1", 00:09:16.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.100 "is_configured": true, 00:09:16.100 "data_offset": 2048, 00:09:16.100 "data_size": 63488 00:09:16.100 }, 00:09:16.100 { 00:09:16.100 "name": "pt2", 00:09:16.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.100 "is_configured": true, 00:09:16.100 "data_offset": 2048, 00:09:16.100 "data_size": 63488 00:09:16.100 }, 00:09:16.100 { 00:09:16.100 "name": "pt3", 00:09:16.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.100 "is_configured": true, 00:09:16.100 "data_offset": 2048, 00:09:16.100 "data_size": 63488 00:09:16.100 } 
00:09:16.100 ] 00:09:16.100 } 00:09:16.100 } 00:09:16.100 }' 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:16.101 pt2 00:09:16.101 pt3' 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.101 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.360 [2024-11-26 13:22:04.818393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=79047c2b-840c-44f8-b2b9-e6ed620df7c1 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 79047c2b-840c-44f8-b2b9-e6ed620df7c1 ']' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.360 [2024-11-26 13:22:04.870088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.360 [2024-11-26 13:22:04.870118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.360 [2024-11-26 13:22:04.870181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.360 [2024-11-26 13:22:04.870315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.360 [2024-11-26 13:22:04.870333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.360 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:16.620 13:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.620 [2024-11-26 13:22:05.018160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:16.620 [2024-11-26 13:22:05.020377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:16.620 [2024-11-26 13:22:05.020441] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:16.620 [2024-11-26 13:22:05.020497] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:16.620 [2024-11-26 13:22:05.020555] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:16.620 [2024-11-26 13:22:05.020601] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:16.620 [2024-11-26 13:22:05.020624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.620 [2024-11-26 13:22:05.020634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:16.620 request: 00:09:16.620 { 00:09:16.620 "name": "raid_bdev1", 00:09:16.620 "raid_level": "raid1", 00:09:16.620 "base_bdevs": [ 00:09:16.620 "malloc1", 00:09:16.620 "malloc2", 00:09:16.620 "malloc3" 00:09:16.620 ], 00:09:16.620 "superblock": false, 00:09:16.620 "method": "bdev_raid_create", 00:09:16.620 "req_id": 1 00:09:16.620 } 00:09:16.620 Got JSON-RPC error response 00:09:16.620 response: 00:09:16.620 { 00:09:16.620 "code": -17, 00:09:16.620 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:16.620 } 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.620 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.620 [2024-11-26 13:22:05.086132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:16.620 [2024-11-26 13:22:05.086187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.620 [2024-11-26 13:22:05.086214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:16.620 [2024-11-26 13:22:05.086226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.620 [2024-11-26 13:22:05.088541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.620 [2024-11-26 13:22:05.088581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:16.620 [2024-11-26 13:22:05.088652] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:16.621 [2024-11-26 13:22:05.088703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.621 pt1 00:09:16.621 
13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.621 "name": "raid_bdev1", 00:09:16.621 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:16.621 "strip_size_kb": 0, 00:09:16.621 
"state": "configuring", 00:09:16.621 "raid_level": "raid1", 00:09:16.621 "superblock": true, 00:09:16.621 "num_base_bdevs": 3, 00:09:16.621 "num_base_bdevs_discovered": 1, 00:09:16.621 "num_base_bdevs_operational": 3, 00:09:16.621 "base_bdevs_list": [ 00:09:16.621 { 00:09:16.621 "name": "pt1", 00:09:16.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.621 "is_configured": true, 00:09:16.621 "data_offset": 2048, 00:09:16.621 "data_size": 63488 00:09:16.621 }, 00:09:16.621 { 00:09:16.621 "name": null, 00:09:16.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.621 "is_configured": false, 00:09:16.621 "data_offset": 2048, 00:09:16.621 "data_size": 63488 00:09:16.621 }, 00:09:16.621 { 00:09:16.621 "name": null, 00:09:16.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.621 "is_configured": false, 00:09:16.621 "data_offset": 2048, 00:09:16.621 "data_size": 63488 00:09:16.621 } 00:09:16.621 ] 00:09:16.621 }' 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.621 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.189 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:17.189 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.189 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.189 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.189 [2024-11-26 13:22:05.610235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.189 [2024-11-26 13:22:05.610323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.189 [2024-11-26 13:22:05.610349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:17.189 
[2024-11-26 13:22:05.610362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.189 [2024-11-26 13:22:05.610745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.190 [2024-11-26 13:22:05.610767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.190 [2024-11-26 13:22:05.610835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:17.190 [2024-11-26 13:22:05.610858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.190 pt2 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.190 [2024-11-26 13:22:05.618303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.190 "name": "raid_bdev1", 00:09:17.190 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:17.190 "strip_size_kb": 0, 00:09:17.190 "state": "configuring", 00:09:17.190 "raid_level": "raid1", 00:09:17.190 "superblock": true, 00:09:17.190 "num_base_bdevs": 3, 00:09:17.190 "num_base_bdevs_discovered": 1, 00:09:17.190 "num_base_bdevs_operational": 3, 00:09:17.190 "base_bdevs_list": [ 00:09:17.190 { 00:09:17.190 "name": "pt1", 00:09:17.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.190 "is_configured": true, 00:09:17.190 "data_offset": 2048, 00:09:17.190 "data_size": 63488 00:09:17.190 }, 00:09:17.190 { 00:09:17.190 "name": null, 00:09:17.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.190 "is_configured": false, 00:09:17.190 "data_offset": 0, 00:09:17.190 "data_size": 63488 00:09:17.190 }, 00:09:17.190 { 00:09:17.190 "name": null, 00:09:17.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.190 "is_configured": false, 00:09:17.190 
"data_offset": 2048, 00:09:17.190 "data_size": 63488 00:09:17.190 } 00:09:17.190 ] 00:09:17.190 }' 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.190 13:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.757 [2024-11-26 13:22:06.150395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.757 [2024-11-26 13:22:06.150626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.757 [2024-11-26 13:22:06.150656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:17.757 [2024-11-26 13:22:06.150671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.757 [2024-11-26 13:22:06.151087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.757 [2024-11-26 13:22:06.151120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.757 [2024-11-26 13:22:06.151185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:17.757 [2024-11-26 13:22:06.151227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.757 pt2 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.757 13:22:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.757 [2024-11-26 13:22:06.158408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.757 [2024-11-26 13:22:06.158459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.757 [2024-11-26 13:22:06.158485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:17.757 [2024-11-26 13:22:06.158501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.757 [2024-11-26 13:22:06.158873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.757 [2024-11-26 13:22:06.158909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.757 [2024-11-26 13:22:06.158971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:17.757 [2024-11-26 13:22:06.158999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:17.757 [2024-11-26 13:22:06.159121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.757 [2024-11-26 13:22:06.159142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:17.757 [2024-11-26 13:22:06.159423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:17.757 [2024-11-26 13:22:06.159615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:17.757 [2024-11-26 13:22:06.159629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:17.757 [2024-11-26 13:22:06.159765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.757 pt3 00:09:17.757 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.758 "name": "raid_bdev1", 00:09:17.758 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:17.758 "strip_size_kb": 0, 00:09:17.758 "state": "online", 00:09:17.758 "raid_level": "raid1", 00:09:17.758 "superblock": true, 00:09:17.758 "num_base_bdevs": 3, 00:09:17.758 "num_base_bdevs_discovered": 3, 00:09:17.758 "num_base_bdevs_operational": 3, 00:09:17.758 "base_bdevs_list": [ 00:09:17.758 { 00:09:17.758 "name": "pt1", 00:09:17.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.758 "is_configured": true, 00:09:17.758 "data_offset": 2048, 00:09:17.758 "data_size": 63488 00:09:17.758 }, 00:09:17.758 { 00:09:17.758 "name": "pt2", 00:09:17.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.758 "is_configured": true, 00:09:17.758 "data_offset": 2048, 00:09:17.758 "data_size": 63488 00:09:17.758 }, 00:09:17.758 { 00:09:17.758 "name": "pt3", 00:09:17.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.758 "is_configured": true, 00:09:17.758 "data_offset": 2048, 00:09:17.758 "data_size": 63488 00:09:17.758 } 00:09:17.758 ] 00:09:17.758 }' 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.758 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.327 [2024-11-26 13:22:06.694845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.327 "name": "raid_bdev1", 00:09:18.327 "aliases": [ 00:09:18.327 "79047c2b-840c-44f8-b2b9-e6ed620df7c1" 00:09:18.327 ], 00:09:18.327 "product_name": "Raid Volume", 00:09:18.327 "block_size": 512, 00:09:18.327 "num_blocks": 63488, 00:09:18.327 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:18.327 "assigned_rate_limits": { 00:09:18.327 "rw_ios_per_sec": 0, 00:09:18.327 "rw_mbytes_per_sec": 0, 00:09:18.327 "r_mbytes_per_sec": 0, 00:09:18.327 "w_mbytes_per_sec": 0 00:09:18.327 }, 00:09:18.327 "claimed": false, 00:09:18.327 "zoned": false, 00:09:18.327 "supported_io_types": { 00:09:18.327 "read": true, 00:09:18.327 "write": true, 00:09:18.327 "unmap": false, 00:09:18.327 "flush": false, 00:09:18.327 "reset": true, 00:09:18.327 "nvme_admin": false, 00:09:18.327 "nvme_io": false, 00:09:18.327 "nvme_io_md": false, 00:09:18.327 "write_zeroes": true, 00:09:18.327 "zcopy": false, 00:09:18.327 "get_zone_info": false, 
00:09:18.327 "zone_management": false, 00:09:18.327 "zone_append": false, 00:09:18.327 "compare": false, 00:09:18.327 "compare_and_write": false, 00:09:18.327 "abort": false, 00:09:18.327 "seek_hole": false, 00:09:18.327 "seek_data": false, 00:09:18.327 "copy": false, 00:09:18.327 "nvme_iov_md": false 00:09:18.327 }, 00:09:18.327 "memory_domains": [ 00:09:18.327 { 00:09:18.327 "dma_device_id": "system", 00:09:18.327 "dma_device_type": 1 00:09:18.327 }, 00:09:18.327 { 00:09:18.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.327 "dma_device_type": 2 00:09:18.327 }, 00:09:18.327 { 00:09:18.327 "dma_device_id": "system", 00:09:18.327 "dma_device_type": 1 00:09:18.327 }, 00:09:18.327 { 00:09:18.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.327 "dma_device_type": 2 00:09:18.327 }, 00:09:18.327 { 00:09:18.327 "dma_device_id": "system", 00:09:18.327 "dma_device_type": 1 00:09:18.327 }, 00:09:18.327 { 00:09:18.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.327 "dma_device_type": 2 00:09:18.327 } 00:09:18.327 ], 00:09:18.327 "driver_specific": { 00:09:18.327 "raid": { 00:09:18.327 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:18.327 "strip_size_kb": 0, 00:09:18.327 "state": "online", 00:09:18.327 "raid_level": "raid1", 00:09:18.327 "superblock": true, 00:09:18.327 "num_base_bdevs": 3, 00:09:18.327 "num_base_bdevs_discovered": 3, 00:09:18.327 "num_base_bdevs_operational": 3, 00:09:18.327 "base_bdevs_list": [ 00:09:18.327 { 00:09:18.327 "name": "pt1", 00:09:18.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.327 "is_configured": true, 00:09:18.327 "data_offset": 2048, 00:09:18.327 "data_size": 63488 00:09:18.327 }, 00:09:18.327 { 00:09:18.327 "name": "pt2", 00:09:18.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.327 "is_configured": true, 00:09:18.327 "data_offset": 2048, 00:09:18.327 "data_size": 63488 00:09:18.327 }, 00:09:18.327 { 00:09:18.327 "name": "pt3", 00:09:18.327 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:18.327 "is_configured": true, 00:09:18.327 "data_offset": 2048, 00:09:18.327 "data_size": 63488 00:09:18.327 } 00:09:18.327 ] 00:09:18.327 } 00:09:18.327 } 00:09:18.327 }' 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.327 pt2 00:09:18.327 pt3' 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.327 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.587 13:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:18.587 [2024-11-26 13:22:07.014954] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 79047c2b-840c-44f8-b2b9-e6ed620df7c1 '!=' 79047c2b-840c-44f8-b2b9-e6ed620df7c1 ']' 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.587 [2024-11-26 13:22:07.070664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.587 13:22:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.587 "name": "raid_bdev1", 00:09:18.587 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:18.587 "strip_size_kb": 0, 00:09:18.587 "state": "online", 00:09:18.587 "raid_level": "raid1", 00:09:18.587 "superblock": true, 00:09:18.587 "num_base_bdevs": 3, 00:09:18.587 "num_base_bdevs_discovered": 2, 00:09:18.587 "num_base_bdevs_operational": 2, 00:09:18.587 "base_bdevs_list": [ 00:09:18.587 { 00:09:18.587 "name": null, 00:09:18.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.587 "is_configured": false, 00:09:18.587 "data_offset": 0, 00:09:18.587 "data_size": 63488 00:09:18.587 }, 00:09:18.587 { 00:09:18.587 "name": "pt2", 00:09:18.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.587 "is_configured": true, 00:09:18.587 "data_offset": 2048, 00:09:18.587 "data_size": 63488 00:09:18.587 }, 00:09:18.587 { 00:09:18.587 "name": "pt3", 00:09:18.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.587 "is_configured": true, 00:09:18.587 "data_offset": 2048, 00:09:18.587 "data_size": 63488 00:09:18.587 } 
00:09:18.587 ] 00:09:18.587 }' 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.587 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.155 [2024-11-26 13:22:07.606789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.155 [2024-11-26 13:22:07.606958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.155 [2024-11-26 13:22:07.607031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.155 [2024-11-26 13:22:07.607089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.155 [2024-11-26 13:22:07.607109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.155 13:22:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.155 [2024-11-26 13:22:07.686766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.155 [2024-11-26 13:22:07.686971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.155 [2024-11-26 13:22:07.687012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:19.155 [2024-11-26 13:22:07.687027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.155 [2024-11-26 13:22:07.689340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.155 [2024-11-26 13:22:07.689383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.155 [2024-11-26 13:22:07.689451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.155 [2024-11-26 13:22:07.689497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.155 pt2 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.155 13:22:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.155 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.415 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.415 "name": "raid_bdev1", 00:09:19.415 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:19.415 "strip_size_kb": 0, 00:09:19.415 "state": "configuring", 00:09:19.415 "raid_level": "raid1", 00:09:19.415 "superblock": true, 00:09:19.415 "num_base_bdevs": 3, 00:09:19.415 "num_base_bdevs_discovered": 1, 00:09:19.415 "num_base_bdevs_operational": 2, 00:09:19.415 "base_bdevs_list": [ 00:09:19.415 { 00:09:19.415 "name": null, 00:09:19.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.415 "is_configured": false, 00:09:19.415 "data_offset": 2048, 00:09:19.415 "data_size": 63488 00:09:19.415 }, 00:09:19.415 { 00:09:19.415 "name": "pt2", 00:09:19.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.415 "is_configured": true, 00:09:19.415 "data_offset": 2048, 00:09:19.415 "data_size": 63488 00:09:19.415 }, 00:09:19.415 { 00:09:19.415 "name": null, 00:09:19.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.415 "is_configured": false, 00:09:19.415 "data_offset": 2048, 00:09:19.415 "data_size": 63488 00:09:19.415 } 
00:09:19.415 ] 00:09:19.415 }' 00:09:19.415 13:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.415 13:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:19.674 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.674 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.675 [2024-11-26 13:22:08.214865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.675 [2024-11-26 13:22:08.214918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.675 [2024-11-26 13:22:08.214939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:19.675 [2024-11-26 13:22:08.214952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.675 [2024-11-26 13:22:08.215352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.675 [2024-11-26 13:22:08.215381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.675 [2024-11-26 13:22:08.215456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.675 [2024-11-26 13:22:08.215488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.675 [2024-11-26 13:22:08.215614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:19.675 [2024-11-26 13:22:08.215632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:19.675 [2024-11-26 13:22:08.215886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:19.675 [2024-11-26 13:22:08.216043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:19.675 [2024-11-26 13:22:08.216056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:19.675 [2024-11-26 13:22:08.216187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.675 pt3 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.675 
13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.675 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.934 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.934 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.934 "name": "raid_bdev1", 00:09:19.934 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:19.934 "strip_size_kb": 0, 00:09:19.934 "state": "online", 00:09:19.934 "raid_level": "raid1", 00:09:19.934 "superblock": true, 00:09:19.934 "num_base_bdevs": 3, 00:09:19.934 "num_base_bdevs_discovered": 2, 00:09:19.934 "num_base_bdevs_operational": 2, 00:09:19.934 "base_bdevs_list": [ 00:09:19.934 { 00:09:19.934 "name": null, 00:09:19.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.934 "is_configured": false, 00:09:19.934 "data_offset": 2048, 00:09:19.934 "data_size": 63488 00:09:19.934 }, 00:09:19.934 { 00:09:19.934 "name": "pt2", 00:09:19.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.934 "is_configured": true, 00:09:19.935 "data_offset": 2048, 00:09:19.935 "data_size": 63488 00:09:19.935 }, 00:09:19.935 { 00:09:19.935 "name": "pt3", 00:09:19.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.935 "is_configured": true, 00:09:19.935 "data_offset": 2048, 00:09:19.935 "data_size": 63488 00:09:19.935 } 00:09:19.935 ] 00:09:19.935 }' 00:09:19.935 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.935 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.194 [2024-11-26 13:22:08.746952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.194 [2024-11-26 13:22:08.747114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.194 [2024-11-26 13:22:08.747183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.194 [2024-11-26 13:22:08.747280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.194 [2024-11-26 13:22:08.747296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.194 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.453 [2024-11-26 13:22:08.818996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.453 [2024-11-26 13:22:08.819047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.453 [2024-11-26 13:22:08.819072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:20.453 [2024-11-26 13:22:08.819084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.453 [2024-11-26 13:22:08.821375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.453 [2024-11-26 13:22:08.821566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.453 [2024-11-26 13:22:08.821671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.453 [2024-11-26 13:22:08.821716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.453 [2024-11-26 13:22:08.821855] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:20.453 [2024-11-26 13:22:08.821871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.453 [2024-11-26 13:22:08.821905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:20.453 [2024-11-26 13:22:08.821960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.453 pt1 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.453 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.454 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.454 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.454 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.454 13:22:08 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.454 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.454 "name": "raid_bdev1", 00:09:20.454 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:20.454 "strip_size_kb": 0, 00:09:20.454 "state": "configuring", 00:09:20.454 "raid_level": "raid1", 00:09:20.454 "superblock": true, 00:09:20.454 "num_base_bdevs": 3, 00:09:20.454 "num_base_bdevs_discovered": 1, 00:09:20.454 "num_base_bdevs_operational": 2, 00:09:20.454 "base_bdevs_list": [ 00:09:20.454 { 00:09:20.454 "name": null, 00:09:20.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.454 "is_configured": false, 00:09:20.454 "data_offset": 2048, 00:09:20.454 "data_size": 63488 00:09:20.454 }, 00:09:20.454 { 00:09:20.454 "name": "pt2", 00:09:20.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.454 "is_configured": true, 00:09:20.454 "data_offset": 2048, 00:09:20.454 "data_size": 63488 00:09:20.454 }, 00:09:20.454 { 00:09:20.454 "name": null, 00:09:20.454 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.454 "is_configured": false, 00:09:20.454 "data_offset": 2048, 00:09:20.454 "data_size": 63488 00:09:20.454 } 00:09:20.454 ] 00:09:20.454 }' 00:09:20.454 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.454 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.022 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.023 [2024-11-26 13:22:09.395118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:21.023 [2024-11-26 13:22:09.395312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.023 [2024-11-26 13:22:09.395348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:21.023 [2024-11-26 13:22:09.395362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.023 [2024-11-26 13:22:09.395811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.023 [2024-11-26 13:22:09.395839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:21.023 [2024-11-26 13:22:09.395907] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:21.023 [2024-11-26 13:22:09.395955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:21.023 [2024-11-26 13:22:09.396074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:21.023 [2024-11-26 13:22:09.396087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:21.023 [2024-11-26 13:22:09.396353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:21.023 [2024-11-26 13:22:09.396539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:21.023 [2024-11-26 13:22:09.396565] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:21.023 [2024-11-26 13:22:09.396712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.023 pt3 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.023 "name": "raid_bdev1", 00:09:21.023 "uuid": "79047c2b-840c-44f8-b2b9-e6ed620df7c1", 00:09:21.023 "strip_size_kb": 0, 00:09:21.023 "state": "online", 00:09:21.023 "raid_level": "raid1", 00:09:21.023 "superblock": true, 00:09:21.023 "num_base_bdevs": 3, 00:09:21.023 "num_base_bdevs_discovered": 2, 00:09:21.023 "num_base_bdevs_operational": 2, 00:09:21.023 "base_bdevs_list": [ 00:09:21.023 { 00:09:21.023 "name": null, 00:09:21.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.023 "is_configured": false, 00:09:21.023 "data_offset": 2048, 00:09:21.023 "data_size": 63488 00:09:21.023 }, 00:09:21.023 { 00:09:21.023 "name": "pt2", 00:09:21.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.023 "is_configured": true, 00:09:21.023 "data_offset": 2048, 00:09:21.023 "data_size": 63488 00:09:21.023 }, 00:09:21.023 { 00:09:21.023 "name": "pt3", 00:09:21.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.023 "is_configured": true, 00:09:21.023 "data_offset": 2048, 00:09:21.023 "data_size": 63488 00:09:21.023 } 00:09:21.023 ] 00:09:21.023 }' 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.023 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.590 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.591 [2024-11-26 13:22:09.975507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.591 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 79047c2b-840c-44f8-b2b9-e6ed620df7c1 '!=' 79047c2b-840c-44f8-b2b9-e6ed620df7c1 ']' 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68177 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68177 ']' 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68177 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68177 00:09:21.591 killing process with pid 68177 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68177' 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68177 00:09:21.591 [2024-11-26 13:22:10.052538] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.591 [2024-11-26 13:22:10.052600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.591 [2024-11-26 13:22:10.052653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.591 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68177 00:09:21.591 [2024-11-26 13:22:10.052669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:21.850 [2024-11-26 13:22:10.256040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.788 ************************************ 00:09:22.788 END TEST raid_superblock_test 00:09:22.788 ************************************ 00:09:22.788 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:22.788 00:09:22.788 real 0m8.308s 00:09:22.788 user 0m13.861s 00:09:22.788 sys 0m1.159s 00:09:22.788 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.788 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.788 13:22:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:22.788 13:22:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.788 13:22:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.788 13:22:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.788 ************************************ 00:09:22.788 START TEST raid_read_error_test 00:09:22.788 ************************************ 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:22.788 13:22:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:22.788 13:22:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Pv95zfrqPA 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68628 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68628 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68628 ']' 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.788 13:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.788 [2024-11-26 13:22:11.268940] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:09:22.788 [2024-11-26 13:22:11.269116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68628 ] 00:09:23.047 [2024-11-26 13:22:11.452391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.047 [2024-11-26 13:22:11.552121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.306 [2024-11-26 13:22:11.720105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.306 [2024-11-26 13:22:11.720170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 BaseBdev1_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 true 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 [2024-11-26 13:22:12.198325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:23.874 [2024-11-26 13:22:12.198390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.874 [2024-11-26 13:22:12.198415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:23.874 [2024-11-26 13:22:12.198431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.874 [2024-11-26 13:22:12.200732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.874 [2024-11-26 13:22:12.200775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:23.874 BaseBdev1 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 BaseBdev2_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 true 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 [2024-11-26 13:22:12.252121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:23.874 [2024-11-26 13:22:12.252428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.874 [2024-11-26 13:22:12.252458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:23.874 [2024-11-26 13:22:12.252475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.874 [2024-11-26 13:22:12.254761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.874 [2024-11-26 13:22:12.254806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:23.874 BaseBdev2 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 BaseBdev3_malloc 00:09:23.874 13:22:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 true 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.874 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.875 [2024-11-26 13:22:12.317079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:23.875 [2024-11-26 13:22:12.317134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.875 [2024-11-26 13:22:12.317157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:23.875 [2024-11-26 13:22:12.317173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.875 [2024-11-26 13:22:12.319503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.875 [2024-11-26 13:22:12.319548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:23.875 BaseBdev3 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.875 [2024-11-26 13:22:12.329161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.875 [2024-11-26 13:22:12.331165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.875 [2024-11-26 13:22:12.331272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.875 [2024-11-26 13:22:12.331505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:23.875 [2024-11-26 13:22:12.331530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:23.875 [2024-11-26 13:22:12.331790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:23.875 [2024-11-26 13:22:12.331998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:23.875 [2024-11-26 13:22:12.332024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:23.875 [2024-11-26 13:22:12.332180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.875 13:22:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.875 "name": "raid_bdev1", 00:09:23.875 "uuid": "3d38ec97-29e1-4b7b-bb38-38bdd9e96131", 00:09:23.875 "strip_size_kb": 0, 00:09:23.875 "state": "online", 00:09:23.875 "raid_level": "raid1", 00:09:23.875 "superblock": true, 00:09:23.875 "num_base_bdevs": 3, 00:09:23.875 "num_base_bdevs_discovered": 3, 00:09:23.875 "num_base_bdevs_operational": 3, 00:09:23.875 "base_bdevs_list": [ 00:09:23.875 { 00:09:23.875 "name": "BaseBdev1", 00:09:23.875 "uuid": "4ad90bfd-dd4a-54c0-b06b-f6bed97d201d", 00:09:23.875 "is_configured": true, 00:09:23.875 "data_offset": 2048, 00:09:23.875 "data_size": 63488 00:09:23.875 }, 00:09:23.875 { 00:09:23.875 "name": "BaseBdev2", 00:09:23.875 "uuid": "8eff8a2c-c4e3-51ac-9992-d55619be4aea", 00:09:23.875 "is_configured": true, 00:09:23.875 "data_offset": 2048, 00:09:23.875 "data_size": 63488 
00:09:23.875 }, 00:09:23.875 { 00:09:23.875 "name": "BaseBdev3", 00:09:23.875 "uuid": "64a12fa8-9dc7-58fb-9c04-16a7dcf7192d", 00:09:23.875 "is_configured": true, 00:09:23.875 "data_offset": 2048, 00:09:23.875 "data_size": 63488 00:09:23.875 } 00:09:23.875 ] 00:09:23.875 }' 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.875 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.443 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.443 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.443 [2024-11-26 13:22:12.942358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.379 
13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.379 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.379 "name": "raid_bdev1", 00:09:25.379 "uuid": "3d38ec97-29e1-4b7b-bb38-38bdd9e96131", 00:09:25.379 "strip_size_kb": 0, 00:09:25.379 "state": "online", 00:09:25.379 "raid_level": "raid1", 00:09:25.379 "superblock": true, 00:09:25.379 "num_base_bdevs": 3, 00:09:25.379 "num_base_bdevs_discovered": 3, 00:09:25.380 "num_base_bdevs_operational": 3, 00:09:25.380 "base_bdevs_list": [ 00:09:25.380 { 00:09:25.380 "name": "BaseBdev1", 00:09:25.380 "uuid": "4ad90bfd-dd4a-54c0-b06b-f6bed97d201d", 
00:09:25.380 "is_configured": true, 00:09:25.380 "data_offset": 2048, 00:09:25.380 "data_size": 63488 00:09:25.380 }, 00:09:25.380 { 00:09:25.380 "name": "BaseBdev2", 00:09:25.380 "uuid": "8eff8a2c-c4e3-51ac-9992-d55619be4aea", 00:09:25.380 "is_configured": true, 00:09:25.380 "data_offset": 2048, 00:09:25.380 "data_size": 63488 00:09:25.380 }, 00:09:25.380 { 00:09:25.380 "name": "BaseBdev3", 00:09:25.380 "uuid": "64a12fa8-9dc7-58fb-9c04-16a7dcf7192d", 00:09:25.380 "is_configured": true, 00:09:25.380 "data_offset": 2048, 00:09:25.380 "data_size": 63488 00:09:25.380 } 00:09:25.380 ] 00:09:25.380 }' 00:09:25.380 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.380 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.947 [2024-11-26 13:22:14.368277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.947 [2024-11-26 13:22:14.368546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.947 [2024-11-26 13:22:14.371433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.947 [2024-11-26 13:22:14.371487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.947 [2024-11-26 13:22:14.371601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.947 [2024-11-26 13:22:14.371619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:25.947 { 00:09:25.947 "results": [ 00:09:25.947 { 00:09:25.947 "job": "raid_bdev1", 
00:09:25.947 "core_mask": "0x1", 00:09:25.947 "workload": "randrw", 00:09:25.947 "percentage": 50, 00:09:25.947 "status": "finished", 00:09:25.947 "queue_depth": 1, 00:09:25.947 "io_size": 131072, 00:09:25.947 "runtime": 1.424345, 00:09:25.947 "iops": 11880.5486030421, 00:09:25.947 "mibps": 1485.0685753802625, 00:09:25.947 "io_failed": 0, 00:09:25.947 "io_timeout": 0, 00:09:25.947 "avg_latency_us": 80.7145546948029, 00:09:25.947 "min_latency_us": 36.77090909090909, 00:09:25.947 "max_latency_us": 1571.3745454545453 00:09:25.947 } 00:09:25.947 ], 00:09:25.947 "core_count": 1 00:09:25.947 } 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68628 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68628 ']' 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68628 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68628 00:09:25.947 killing process with pid 68628 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68628' 00:09:25.947 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68628 00:09:25.947 [2024-11-26 13:22:14.408177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.947 13:22:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68628 00:09:26.206 [2024-11-26 13:22:14.566621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Pv95zfrqPA 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:27.143 00:09:27.143 real 0m4.303s 00:09:27.143 user 0m5.371s 00:09:27.143 sys 0m0.533s 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.143 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.143 ************************************ 00:09:27.143 END TEST raid_read_error_test 00:09:27.143 ************************************ 00:09:27.143 13:22:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:27.143 13:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.143 13:22:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.143 13:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.143 ************************************ 00:09:27.143 START TEST raid_write_error_test 00:09:27.143 ************************************ 00:09:27.143 13:22:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.143 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d0TFGhl0io 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68768 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68768 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68768 ']' 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.144 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.144 [2024-11-26 13:22:15.630297] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:09:27.144 [2024-11-26 13:22:15.630485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68768 ] 00:09:27.402 [2024-11-26 13:22:15.813518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.402 [2024-11-26 13:22:15.911632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.661 [2024-11-26 13:22:16.080889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.661 [2024-11-26 13:22:16.080954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.229 BaseBdev1_malloc 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.229 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.229 true 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 [2024-11-26 13:22:16.620200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.230 [2024-11-26 13:22:16.620280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.230 [2024-11-26 13:22:16.620306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.230 [2024-11-26 13:22:16.620322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.230 [2024-11-26 13:22:16.622617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.230 [2024-11-26 13:22:16.622661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.230 BaseBdev1 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.230 BaseBdev2_malloc 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 true 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 [2024-11-26 13:22:16.670176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.230 [2024-11-26 13:22:16.670239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.230 [2024-11-26 13:22:16.670263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.230 [2024-11-26 13:22:16.670310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.230 [2024-11-26 13:22:16.672642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.230 [2024-11-26 13:22:16.672682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.230 BaseBdev2 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.230 13:22:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 BaseBdev3_malloc 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 true 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 [2024-11-26 13:22:16.737533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:28.230 [2024-11-26 13:22:16.737585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.230 [2024-11-26 13:22:16.737607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:28.230 [2024-11-26 13:22:16.737622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.230 [2024-11-26 13:22:16.739919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.230 [2024-11-26 13:22:16.739964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:28.230 BaseBdev3 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 [2024-11-26 13:22:16.745598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.230 [2024-11-26 13:22:16.747641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.230 [2024-11-26 13:22:16.747733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.230 [2024-11-26 13:22:16.747961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:28.230 [2024-11-26 13:22:16.747985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:28.230 [2024-11-26 13:22:16.748260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:28.230 [2024-11-26 13:22:16.748462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:28.230 [2024-11-26 13:22:16.748488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:28.230 [2024-11-26 13:22:16.748644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.230 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.488 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.488 "name": "raid_bdev1", 00:09:28.488 "uuid": "ae716557-597d-447b-bcf1-8483cc8828b7", 00:09:28.488 "strip_size_kb": 0, 00:09:28.488 "state": "online", 00:09:28.488 "raid_level": "raid1", 00:09:28.488 "superblock": true, 00:09:28.488 "num_base_bdevs": 3, 00:09:28.488 "num_base_bdevs_discovered": 3, 00:09:28.488 "num_base_bdevs_operational": 3, 00:09:28.488 "base_bdevs_list": [ 00:09:28.488 { 00:09:28.488 "name": "BaseBdev1", 00:09:28.488 
"uuid": "c9a2ea6e-7787-55bd-a4a2-c54a5bf92efa", 00:09:28.488 "is_configured": true, 00:09:28.488 "data_offset": 2048, 00:09:28.488 "data_size": 63488 00:09:28.488 }, 00:09:28.488 { 00:09:28.488 "name": "BaseBdev2", 00:09:28.488 "uuid": "dec0d3fd-26a9-549d-8c91-7181d7c59bb3", 00:09:28.488 "is_configured": true, 00:09:28.488 "data_offset": 2048, 00:09:28.488 "data_size": 63488 00:09:28.488 }, 00:09:28.488 { 00:09:28.488 "name": "BaseBdev3", 00:09:28.488 "uuid": "96ed668a-f8dd-515a-b651-8650a4b0db8a", 00:09:28.488 "is_configured": true, 00:09:28.488 "data_offset": 2048, 00:09:28.488 "data_size": 63488 00:09:28.488 } 00:09:28.488 ] 00:09:28.488 }' 00:09:28.488 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.488 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.746 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.746 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:29.004 [2024-11-26 13:22:17.378837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.941 [2024-11-26 13:22:18.260585] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:29.941 [2024-11-26 13:22:18.260642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.941 [2024-11-26 13:22:18.260875] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.941 "name": "raid_bdev1", 00:09:29.941 "uuid": "ae716557-597d-447b-bcf1-8483cc8828b7", 00:09:29.941 "strip_size_kb": 0, 00:09:29.941 "state": "online", 00:09:29.941 "raid_level": "raid1", 00:09:29.941 "superblock": true, 00:09:29.941 "num_base_bdevs": 3, 00:09:29.941 "num_base_bdevs_discovered": 2, 00:09:29.941 "num_base_bdevs_operational": 2, 00:09:29.941 "base_bdevs_list": [ 00:09:29.941 { 00:09:29.941 "name": null, 00:09:29.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.941 "is_configured": false, 00:09:29.941 "data_offset": 0, 00:09:29.941 "data_size": 63488 00:09:29.941 }, 00:09:29.941 { 00:09:29.941 "name": "BaseBdev2", 00:09:29.941 "uuid": "dec0d3fd-26a9-549d-8c91-7181d7c59bb3", 00:09:29.941 "is_configured": true, 00:09:29.941 "data_offset": 2048, 00:09:29.941 "data_size": 63488 00:09:29.941 }, 00:09:29.941 { 00:09:29.941 "name": "BaseBdev3", 00:09:29.941 "uuid": "96ed668a-f8dd-515a-b651-8650a4b0db8a", 00:09:29.941 "is_configured": true, 00:09:29.941 "data_offset": 2048, 00:09:29.941 "data_size": 63488 00:09:29.941 } 00:09:29.941 ] 00:09:29.941 }' 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.941 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.509 [2024-11-26 13:22:18.792529] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.509 [2024-11-26 13:22:18.792567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.509 [2024-11-26 13:22:18.795142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.509 [2024-11-26 13:22:18.795209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.509 [2024-11-26 13:22:18.795309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.509 [2024-11-26 13:22:18.795331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:30.509 { 00:09:30.509 "results": [ 00:09:30.509 { 00:09:30.509 "job": "raid_bdev1", 00:09:30.509 "core_mask": "0x1", 00:09:30.509 "workload": "randrw", 00:09:30.509 "percentage": 50, 00:09:30.509 "status": "finished", 00:09:30.509 "queue_depth": 1, 00:09:30.509 "io_size": 131072, 00:09:30.509 "runtime": 1.411767, 00:09:30.509 "iops": 13399.519892446842, 00:09:30.509 "mibps": 1674.9399865558553, 00:09:30.509 "io_failed": 0, 00:09:30.509 "io_timeout": 0, 00:09:30.509 "avg_latency_us": 71.15003090053679, 00:09:30.509 "min_latency_us": 37.00363636363636, 00:09:30.509 "max_latency_us": 1489.4545454545455 00:09:30.509 } 00:09:30.509 ], 00:09:30.509 "core_count": 1 00:09:30.509 } 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68768 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68768 ']' 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68768 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:30.509 13:22:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68768 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.509 killing process with pid 68768 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68768' 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68768 00:09:30.509 [2024-11-26 13:22:18.831764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.509 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68768 00:09:30.509 [2024-11-26 13:22:18.986314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d0TFGhl0io 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:31.444 00:09:31.444 real 0m4.359s 00:09:31.444 user 0m5.456s 00:09:31.444 sys 0m0.578s 00:09:31.444 13:22:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.444 13:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.444 ************************************ 00:09:31.444 END TEST raid_write_error_test 00:09:31.444 ************************************ 00:09:31.444 13:22:19 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:31.444 13:22:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:31.444 13:22:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:31.444 13:22:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.444 13:22:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.444 13:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.444 ************************************ 00:09:31.444 START TEST raid_state_function_test 00:09:31.444 ************************************ 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:31.444 
13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=68912 00:09:31.444 Process raid pid: 68912 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68912' 00:09:31.444 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.445 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 68912 00:09:31.445 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 68912 ']' 00:09:31.445 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.445 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.445 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.445 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.445 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.703 [2024-11-26 13:22:20.041548] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:09:31.703 [2024-11-26 13:22:20.041725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.703 [2024-11-26 13:22:20.234305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.960 [2024-11-26 13:22:20.382797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.238 [2024-11-26 13:22:20.556223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.238 [2024-11-26 13:22:20.556276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 [2024-11-26 13:22:21.028908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.570 [2024-11-26 13:22:21.028962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.570 [2024-11-26 13:22:21.028976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.570 [2024-11-26 13:22:21.028991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.570 [2024-11-26 13:22:21.028998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:32.570 [2024-11-26 13:22:21.029010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.570 [2024-11-26 13:22:21.029018] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.570 [2024-11-26 13:22:21.029029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.570 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.851 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.851 "name": "Existed_Raid", 00:09:32.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.851 "strip_size_kb": 64, 00:09:32.851 "state": "configuring", 00:09:32.851 "raid_level": "raid0", 00:09:32.851 "superblock": false, 00:09:32.851 "num_base_bdevs": 4, 00:09:32.851 "num_base_bdevs_discovered": 0, 00:09:32.851 "num_base_bdevs_operational": 4, 00:09:32.851 "base_bdevs_list": [ 00:09:32.851 { 00:09:32.851 "name": "BaseBdev1", 00:09:32.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.851 "is_configured": false, 00:09:32.851 "data_offset": 0, 00:09:32.851 "data_size": 0 00:09:32.851 }, 00:09:32.851 { 00:09:32.851 "name": "BaseBdev2", 00:09:32.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.851 "is_configured": false, 00:09:32.851 "data_offset": 0, 00:09:32.851 "data_size": 0 00:09:32.851 }, 00:09:32.851 { 00:09:32.851 "name": "BaseBdev3", 00:09:32.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.851 "is_configured": false, 00:09:32.851 "data_offset": 0, 00:09:32.851 "data_size": 0 00:09:32.851 }, 00:09:32.851 { 00:09:32.851 "name": "BaseBdev4", 00:09:32.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.851 "is_configured": false, 00:09:32.851 "data_offset": 0, 00:09:32.851 "data_size": 0 00:09:32.852 } 00:09:32.852 ] 00:09:32.852 }' 00:09:32.852 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.110 [2024-11-26 13:22:21.552952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.110 [2024-11-26 13:22:21.552990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.110 [2024-11-26 13:22:21.560955] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.110 [2024-11-26 13:22:21.560996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.110 [2024-11-26 13:22:21.561008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.110 [2024-11-26 13:22:21.561021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.110 [2024-11-26 13:22:21.561029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.110 [2024-11-26 13:22:21.561040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.110 [2024-11-26 13:22:21.561048] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:33.110 [2024-11-26 13:22:21.561060] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.110 [2024-11-26 13:22:21.600172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.110 BaseBdev1 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.110 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 [ 00:09:33.111 { 00:09:33.111 "name": "BaseBdev1", 00:09:33.111 "aliases": [ 00:09:33.111 "7ed15ed4-a328-475b-8d6a-ad96cab2286b" 00:09:33.111 ], 00:09:33.111 "product_name": "Malloc disk", 00:09:33.111 "block_size": 512, 00:09:33.111 "num_blocks": 65536, 00:09:33.111 "uuid": "7ed15ed4-a328-475b-8d6a-ad96cab2286b", 00:09:33.111 "assigned_rate_limits": { 00:09:33.111 "rw_ios_per_sec": 0, 00:09:33.111 "rw_mbytes_per_sec": 0, 00:09:33.111 "r_mbytes_per_sec": 0, 00:09:33.111 "w_mbytes_per_sec": 0 00:09:33.111 }, 00:09:33.111 "claimed": true, 00:09:33.111 "claim_type": "exclusive_write", 00:09:33.111 "zoned": false, 00:09:33.111 "supported_io_types": { 00:09:33.111 "read": true, 00:09:33.111 "write": true, 00:09:33.111 "unmap": true, 00:09:33.111 "flush": true, 00:09:33.111 "reset": true, 00:09:33.111 "nvme_admin": false, 00:09:33.111 "nvme_io": false, 00:09:33.111 "nvme_io_md": false, 00:09:33.111 "write_zeroes": true, 00:09:33.111 "zcopy": true, 00:09:33.111 "get_zone_info": false, 00:09:33.111 "zone_management": false, 00:09:33.111 "zone_append": false, 00:09:33.111 "compare": false, 00:09:33.111 "compare_and_write": false, 00:09:33.111 "abort": true, 00:09:33.111 "seek_hole": false, 00:09:33.111 "seek_data": false, 00:09:33.111 "copy": true, 00:09:33.111 "nvme_iov_md": false 00:09:33.111 }, 00:09:33.111 "memory_domains": [ 00:09:33.111 { 00:09:33.111 "dma_device_id": "system", 00:09:33.111 "dma_device_type": 1 00:09:33.111 }, 00:09:33.111 { 00:09:33.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.111 "dma_device_type": 2 00:09:33.111 } 00:09:33.111 ], 00:09:33.111 "driver_specific": {} 00:09:33.111 } 00:09:33.111 ] 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.368 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.368 "name": "Existed_Raid", 
00:09:33.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.368 "strip_size_kb": 64, 00:09:33.368 "state": "configuring", 00:09:33.368 "raid_level": "raid0", 00:09:33.368 "superblock": false, 00:09:33.368 "num_base_bdevs": 4, 00:09:33.368 "num_base_bdevs_discovered": 1, 00:09:33.368 "num_base_bdevs_operational": 4, 00:09:33.368 "base_bdevs_list": [ 00:09:33.368 { 00:09:33.368 "name": "BaseBdev1", 00:09:33.368 "uuid": "7ed15ed4-a328-475b-8d6a-ad96cab2286b", 00:09:33.368 "is_configured": true, 00:09:33.368 "data_offset": 0, 00:09:33.368 "data_size": 65536 00:09:33.369 }, 00:09:33.369 { 00:09:33.369 "name": "BaseBdev2", 00:09:33.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.369 "is_configured": false, 00:09:33.369 "data_offset": 0, 00:09:33.369 "data_size": 0 00:09:33.369 }, 00:09:33.369 { 00:09:33.369 "name": "BaseBdev3", 00:09:33.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.369 "is_configured": false, 00:09:33.369 "data_offset": 0, 00:09:33.369 "data_size": 0 00:09:33.369 }, 00:09:33.369 { 00:09:33.369 "name": "BaseBdev4", 00:09:33.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.369 "is_configured": false, 00:09:33.369 "data_offset": 0, 00:09:33.369 "data_size": 0 00:09:33.369 } 00:09:33.369 ] 00:09:33.369 }' 00:09:33.369 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.369 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.627 [2024-11-26 13:22:22.148316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.627 [2024-11-26 13:22:22.148353] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.627 [2024-11-26 13:22:22.160379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.627 [2024-11-26 13:22:22.162587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.627 [2024-11-26 13:22:22.162788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.627 [2024-11-26 13:22:22.162898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.627 [2024-11-26 13:22:22.162954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.627 [2024-11-26 13:22:22.163199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:33.627 [2024-11-26 13:22:22.163284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.627 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.885 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.885 "name": "Existed_Raid", 00:09:33.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.885 "strip_size_kb": 64, 00:09:33.885 "state": "configuring", 00:09:33.885 "raid_level": "raid0", 00:09:33.885 "superblock": false, 00:09:33.885 "num_base_bdevs": 4, 00:09:33.885 
"num_base_bdevs_discovered": 1, 00:09:33.885 "num_base_bdevs_operational": 4, 00:09:33.885 "base_bdevs_list": [ 00:09:33.885 { 00:09:33.885 "name": "BaseBdev1", 00:09:33.885 "uuid": "7ed15ed4-a328-475b-8d6a-ad96cab2286b", 00:09:33.885 "is_configured": true, 00:09:33.885 "data_offset": 0, 00:09:33.885 "data_size": 65536 00:09:33.885 }, 00:09:33.885 { 00:09:33.885 "name": "BaseBdev2", 00:09:33.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.885 "is_configured": false, 00:09:33.885 "data_offset": 0, 00:09:33.885 "data_size": 0 00:09:33.885 }, 00:09:33.885 { 00:09:33.885 "name": "BaseBdev3", 00:09:33.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.885 "is_configured": false, 00:09:33.885 "data_offset": 0, 00:09:33.885 "data_size": 0 00:09:33.885 }, 00:09:33.885 { 00:09:33.885 "name": "BaseBdev4", 00:09:33.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.885 "is_configured": false, 00:09:33.885 "data_offset": 0, 00:09:33.885 "data_size": 0 00:09:33.885 } 00:09:33.885 ] 00:09:33.885 }' 00:09:33.885 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.885 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.142 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.142 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.142 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.400 [2024-11-26 13:22:22.713149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.400 BaseBdev2 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.400 13:22:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.400 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.401 [ 00:09:34.401 { 00:09:34.401 "name": "BaseBdev2", 00:09:34.401 "aliases": [ 00:09:34.401 "df3ee0e3-9c32-466b-995d-f48a35da7b0c" 00:09:34.401 ], 00:09:34.401 "product_name": "Malloc disk", 00:09:34.401 "block_size": 512, 00:09:34.401 "num_blocks": 65536, 00:09:34.401 "uuid": "df3ee0e3-9c32-466b-995d-f48a35da7b0c", 00:09:34.401 "assigned_rate_limits": { 00:09:34.401 "rw_ios_per_sec": 0, 00:09:34.401 "rw_mbytes_per_sec": 0, 00:09:34.401 "r_mbytes_per_sec": 0, 00:09:34.401 "w_mbytes_per_sec": 0 00:09:34.401 }, 00:09:34.401 "claimed": true, 00:09:34.401 "claim_type": "exclusive_write", 00:09:34.401 "zoned": false, 00:09:34.401 "supported_io_types": { 
00:09:34.401 "read": true, 00:09:34.401 "write": true, 00:09:34.401 "unmap": true, 00:09:34.401 "flush": true, 00:09:34.401 "reset": true, 00:09:34.401 "nvme_admin": false, 00:09:34.401 "nvme_io": false, 00:09:34.401 "nvme_io_md": false, 00:09:34.401 "write_zeroes": true, 00:09:34.401 "zcopy": true, 00:09:34.401 "get_zone_info": false, 00:09:34.401 "zone_management": false, 00:09:34.401 "zone_append": false, 00:09:34.401 "compare": false, 00:09:34.401 "compare_and_write": false, 00:09:34.401 "abort": true, 00:09:34.401 "seek_hole": false, 00:09:34.401 "seek_data": false, 00:09:34.401 "copy": true, 00:09:34.401 "nvme_iov_md": false 00:09:34.401 }, 00:09:34.401 "memory_domains": [ 00:09:34.401 { 00:09:34.401 "dma_device_id": "system", 00:09:34.401 "dma_device_type": 1 00:09:34.401 }, 00:09:34.401 { 00:09:34.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.401 "dma_device_type": 2 00:09:34.401 } 00:09:34.401 ], 00:09:34.401 "driver_specific": {} 00:09:34.401 } 00:09:34.401 ] 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.401 "name": "Existed_Raid", 00:09:34.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.401 "strip_size_kb": 64, 00:09:34.401 "state": "configuring", 00:09:34.401 "raid_level": "raid0", 00:09:34.401 "superblock": false, 00:09:34.401 "num_base_bdevs": 4, 00:09:34.401 "num_base_bdevs_discovered": 2, 00:09:34.401 "num_base_bdevs_operational": 4, 00:09:34.401 "base_bdevs_list": [ 00:09:34.401 { 00:09:34.401 "name": "BaseBdev1", 00:09:34.401 "uuid": "7ed15ed4-a328-475b-8d6a-ad96cab2286b", 00:09:34.401 "is_configured": true, 00:09:34.401 "data_offset": 0, 00:09:34.401 "data_size": 65536 00:09:34.401 }, 00:09:34.401 { 00:09:34.401 "name": "BaseBdev2", 00:09:34.401 "uuid": "df3ee0e3-9c32-466b-995d-f48a35da7b0c", 00:09:34.401 
"is_configured": true, 00:09:34.401 "data_offset": 0, 00:09:34.401 "data_size": 65536 00:09:34.401 }, 00:09:34.401 { 00:09:34.401 "name": "BaseBdev3", 00:09:34.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.401 "is_configured": false, 00:09:34.401 "data_offset": 0, 00:09:34.401 "data_size": 0 00:09:34.401 }, 00:09:34.401 { 00:09:34.401 "name": "BaseBdev4", 00:09:34.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.401 "is_configured": false, 00:09:34.401 "data_offset": 0, 00:09:34.401 "data_size": 0 00:09:34.401 } 00:09:34.401 ] 00:09:34.401 }' 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.401 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.968 [2024-11-26 13:22:23.307554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.968 BaseBdev3 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.968 13:22:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.969 [ 00:09:34.969 { 00:09:34.969 "name": "BaseBdev3", 00:09:34.969 "aliases": [ 00:09:34.969 "88a688a7-3e52-4881-a630-22cdc37ecb9e" 00:09:34.969 ], 00:09:34.969 "product_name": "Malloc disk", 00:09:34.969 "block_size": 512, 00:09:34.969 "num_blocks": 65536, 00:09:34.969 "uuid": "88a688a7-3e52-4881-a630-22cdc37ecb9e", 00:09:34.969 "assigned_rate_limits": { 00:09:34.969 "rw_ios_per_sec": 0, 00:09:34.969 "rw_mbytes_per_sec": 0, 00:09:34.969 "r_mbytes_per_sec": 0, 00:09:34.969 "w_mbytes_per_sec": 0 00:09:34.969 }, 00:09:34.969 "claimed": true, 00:09:34.969 "claim_type": "exclusive_write", 00:09:34.969 "zoned": false, 00:09:34.969 "supported_io_types": { 00:09:34.969 "read": true, 00:09:34.969 "write": true, 00:09:34.969 "unmap": true, 00:09:34.969 "flush": true, 00:09:34.969 "reset": true, 00:09:34.969 "nvme_admin": false, 00:09:34.969 "nvme_io": false, 00:09:34.969 "nvme_io_md": false, 00:09:34.969 "write_zeroes": true, 00:09:34.969 "zcopy": true, 00:09:34.969 "get_zone_info": false, 00:09:34.969 "zone_management": false, 00:09:34.969 "zone_append": false, 00:09:34.969 "compare": false, 00:09:34.969 "compare_and_write": false, 
00:09:34.969 "abort": true, 00:09:34.969 "seek_hole": false, 00:09:34.969 "seek_data": false, 00:09:34.969 "copy": true, 00:09:34.969 "nvme_iov_md": false 00:09:34.969 }, 00:09:34.969 "memory_domains": [ 00:09:34.969 { 00:09:34.969 "dma_device_id": "system", 00:09:34.969 "dma_device_type": 1 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.969 "dma_device_type": 2 00:09:34.969 } 00:09:34.969 ], 00:09:34.969 "driver_specific": {} 00:09:34.969 } 00:09:34.969 ] 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.969 "name": "Existed_Raid", 00:09:34.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.969 "strip_size_kb": 64, 00:09:34.969 "state": "configuring", 00:09:34.969 "raid_level": "raid0", 00:09:34.969 "superblock": false, 00:09:34.969 "num_base_bdevs": 4, 00:09:34.969 "num_base_bdevs_discovered": 3, 00:09:34.969 "num_base_bdevs_operational": 4, 00:09:34.969 "base_bdevs_list": [ 00:09:34.969 { 00:09:34.969 "name": "BaseBdev1", 00:09:34.969 "uuid": "7ed15ed4-a328-475b-8d6a-ad96cab2286b", 00:09:34.969 "is_configured": true, 00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 65536 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "name": "BaseBdev2", 00:09:34.969 "uuid": "df3ee0e3-9c32-466b-995d-f48a35da7b0c", 00:09:34.969 "is_configured": true, 00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 65536 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "name": "BaseBdev3", 00:09:34.969 "uuid": "88a688a7-3e52-4881-a630-22cdc37ecb9e", 00:09:34.969 "is_configured": true, 00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 65536 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "name": "BaseBdev4", 00:09:34.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.969 "is_configured": false, 
00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 0 00:09:34.969 } 00:09:34.969 ] 00:09:34.969 }' 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.969 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.537 [2024-11-26 13:22:23.896362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:35.537 [2024-11-26 13:22:23.896404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.537 [2024-11-26 13:22:23.896417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:35.537 [2024-11-26 13:22:23.896699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:35.537 [2024-11-26 13:22:23.896885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.537 [2024-11-26 13:22:23.896908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.537 [2024-11-26 13:22:23.897151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.537 BaseBdev4 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.537 [ 00:09:35.537 { 00:09:35.537 "name": "BaseBdev4", 00:09:35.537 "aliases": [ 00:09:35.537 "7b62ae8e-71fa-4abf-8bbc-4d7c378b6198" 00:09:35.537 ], 00:09:35.537 "product_name": "Malloc disk", 00:09:35.537 "block_size": 512, 00:09:35.537 "num_blocks": 65536, 00:09:35.537 "uuid": "7b62ae8e-71fa-4abf-8bbc-4d7c378b6198", 00:09:35.537 "assigned_rate_limits": { 00:09:35.537 "rw_ios_per_sec": 0, 00:09:35.537 "rw_mbytes_per_sec": 0, 00:09:35.537 "r_mbytes_per_sec": 0, 00:09:35.537 "w_mbytes_per_sec": 0 00:09:35.537 }, 00:09:35.537 "claimed": true, 00:09:35.537 "claim_type": "exclusive_write", 00:09:35.537 "zoned": false, 00:09:35.537 "supported_io_types": { 00:09:35.537 "read": true, 00:09:35.537 "write": true, 00:09:35.537 "unmap": true, 00:09:35.537 "flush": true, 00:09:35.537 "reset": true, 00:09:35.537 
"nvme_admin": false, 00:09:35.537 "nvme_io": false, 00:09:35.537 "nvme_io_md": false, 00:09:35.537 "write_zeroes": true, 00:09:35.537 "zcopy": true, 00:09:35.537 "get_zone_info": false, 00:09:35.537 "zone_management": false, 00:09:35.537 "zone_append": false, 00:09:35.537 "compare": false, 00:09:35.537 "compare_and_write": false, 00:09:35.537 "abort": true, 00:09:35.537 "seek_hole": false, 00:09:35.537 "seek_data": false, 00:09:35.537 "copy": true, 00:09:35.537 "nvme_iov_md": false 00:09:35.537 }, 00:09:35.537 "memory_domains": [ 00:09:35.537 { 00:09:35.537 "dma_device_id": "system", 00:09:35.537 "dma_device_type": 1 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.537 "dma_device_type": 2 00:09:35.537 } 00:09:35.537 ], 00:09:35.537 "driver_specific": {} 00:09:35.537 } 00:09:35.537 ] 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.537 13:22:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.537 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.537 "name": "Existed_Raid", 00:09:35.537 "uuid": "40cc54c2-121b-4dcd-98b3-0b3827e68c63", 00:09:35.537 "strip_size_kb": 64, 00:09:35.537 "state": "online", 00:09:35.537 "raid_level": "raid0", 00:09:35.537 "superblock": false, 00:09:35.537 "num_base_bdevs": 4, 00:09:35.537 "num_base_bdevs_discovered": 4, 00:09:35.537 "num_base_bdevs_operational": 4, 00:09:35.537 "base_bdevs_list": [ 00:09:35.537 { 00:09:35.537 "name": "BaseBdev1", 00:09:35.537 "uuid": "7ed15ed4-a328-475b-8d6a-ad96cab2286b", 00:09:35.537 "is_configured": true, 00:09:35.537 "data_offset": 0, 00:09:35.537 "data_size": 65536 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "name": "BaseBdev2", 00:09:35.537 "uuid": "df3ee0e3-9c32-466b-995d-f48a35da7b0c", 00:09:35.537 "is_configured": true, 00:09:35.537 "data_offset": 0, 00:09:35.537 "data_size": 65536 00:09:35.537 }, 00:09:35.538 { 00:09:35.538 "name": "BaseBdev3", 00:09:35.538 "uuid": 
"88a688a7-3e52-4881-a630-22cdc37ecb9e", 00:09:35.538 "is_configured": true, 00:09:35.538 "data_offset": 0, 00:09:35.538 "data_size": 65536 00:09:35.538 }, 00:09:35.538 { 00:09:35.538 "name": "BaseBdev4", 00:09:35.538 "uuid": "7b62ae8e-71fa-4abf-8bbc-4d7c378b6198", 00:09:35.538 "is_configured": true, 00:09:35.538 "data_offset": 0, 00:09:35.538 "data_size": 65536 00:09:35.538 } 00:09:35.538 ] 00:09:35.538 }' 00:09:35.538 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.538 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.106 [2024-11-26 13:22:24.460854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.106 13:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.106 "name": "Existed_Raid", 00:09:36.106 "aliases": [ 00:09:36.106 "40cc54c2-121b-4dcd-98b3-0b3827e68c63" 00:09:36.106 ], 00:09:36.106 "product_name": "Raid Volume", 00:09:36.106 "block_size": 512, 00:09:36.106 "num_blocks": 262144, 00:09:36.106 "uuid": "40cc54c2-121b-4dcd-98b3-0b3827e68c63", 00:09:36.106 "assigned_rate_limits": { 00:09:36.106 "rw_ios_per_sec": 0, 00:09:36.106 "rw_mbytes_per_sec": 0, 00:09:36.106 "r_mbytes_per_sec": 0, 00:09:36.106 "w_mbytes_per_sec": 0 00:09:36.106 }, 00:09:36.106 "claimed": false, 00:09:36.106 "zoned": false, 00:09:36.106 "supported_io_types": { 00:09:36.106 "read": true, 00:09:36.106 "write": true, 00:09:36.106 "unmap": true, 00:09:36.106 "flush": true, 00:09:36.106 "reset": true, 00:09:36.106 "nvme_admin": false, 00:09:36.106 "nvme_io": false, 00:09:36.106 "nvme_io_md": false, 00:09:36.106 "write_zeroes": true, 00:09:36.106 "zcopy": false, 00:09:36.106 "get_zone_info": false, 00:09:36.106 "zone_management": false, 00:09:36.106 "zone_append": false, 00:09:36.106 "compare": false, 00:09:36.106 "compare_and_write": false, 00:09:36.106 "abort": false, 00:09:36.106 "seek_hole": false, 00:09:36.106 "seek_data": false, 00:09:36.106 "copy": false, 00:09:36.106 "nvme_iov_md": false 00:09:36.106 }, 00:09:36.106 "memory_domains": [ 00:09:36.106 { 00:09:36.106 "dma_device_id": "system", 00:09:36.106 "dma_device_type": 1 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.106 "dma_device_type": 2 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "dma_device_id": "system", 00:09:36.106 "dma_device_type": 1 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.106 "dma_device_type": 2 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "dma_device_id": "system", 00:09:36.106 "dma_device_type": 1 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:36.106 "dma_device_type": 2 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "dma_device_id": "system", 00:09:36.106 "dma_device_type": 1 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.106 "dma_device_type": 2 00:09:36.106 } 00:09:36.106 ], 00:09:36.106 "driver_specific": { 00:09:36.106 "raid": { 00:09:36.106 "uuid": "40cc54c2-121b-4dcd-98b3-0b3827e68c63", 00:09:36.106 "strip_size_kb": 64, 00:09:36.106 "state": "online", 00:09:36.106 "raid_level": "raid0", 00:09:36.106 "superblock": false, 00:09:36.106 "num_base_bdevs": 4, 00:09:36.106 "num_base_bdevs_discovered": 4, 00:09:36.106 "num_base_bdevs_operational": 4, 00:09:36.106 "base_bdevs_list": [ 00:09:36.106 { 00:09:36.106 "name": "BaseBdev1", 00:09:36.106 "uuid": "7ed15ed4-a328-475b-8d6a-ad96cab2286b", 00:09:36.106 "is_configured": true, 00:09:36.106 "data_offset": 0, 00:09:36.106 "data_size": 65536 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "name": "BaseBdev2", 00:09:36.106 "uuid": "df3ee0e3-9c32-466b-995d-f48a35da7b0c", 00:09:36.106 "is_configured": true, 00:09:36.106 "data_offset": 0, 00:09:36.106 "data_size": 65536 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "name": "BaseBdev3", 00:09:36.106 "uuid": "88a688a7-3e52-4881-a630-22cdc37ecb9e", 00:09:36.106 "is_configured": true, 00:09:36.106 "data_offset": 0, 00:09:36.106 "data_size": 65536 00:09:36.106 }, 00:09:36.106 { 00:09:36.106 "name": "BaseBdev4", 00:09:36.106 "uuid": "7b62ae8e-71fa-4abf-8bbc-4d7c378b6198", 00:09:36.106 "is_configured": true, 00:09:36.106 "data_offset": 0, 00:09:36.106 "data_size": 65536 00:09:36.106 } 00:09:36.106 ] 00:09:36.106 } 00:09:36.106 } 00:09:36.106 }' 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:36.106 BaseBdev2 00:09:36.106 BaseBdev3 
00:09:36.106 BaseBdev4' 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.106 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:36.107 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.107 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.107 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.107 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.107 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.107 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 13:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.366 13:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.366 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.367 [2024-11-26 13:22:24.840660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.367 [2024-11-26 13:22:24.840690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.367 [2024-11-26 13:22:24.840738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.367 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.626 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.626 "name": "Existed_Raid", 00:09:36.626 "uuid": "40cc54c2-121b-4dcd-98b3-0b3827e68c63", 00:09:36.626 "strip_size_kb": 64, 00:09:36.626 "state": "offline", 00:09:36.626 "raid_level": "raid0", 00:09:36.626 "superblock": false, 00:09:36.626 "num_base_bdevs": 4, 00:09:36.626 "num_base_bdevs_discovered": 3, 00:09:36.626 "num_base_bdevs_operational": 3, 00:09:36.626 "base_bdevs_list": [ 00:09:36.626 { 00:09:36.626 "name": null, 00:09:36.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.626 "is_configured": false, 00:09:36.626 "data_offset": 0, 00:09:36.626 "data_size": 65536 00:09:36.626 }, 00:09:36.626 { 00:09:36.626 "name": "BaseBdev2", 00:09:36.626 "uuid": "df3ee0e3-9c32-466b-995d-f48a35da7b0c", 00:09:36.626 "is_configured": 
true, 00:09:36.626 "data_offset": 0, 00:09:36.626 "data_size": 65536 00:09:36.626 }, 00:09:36.626 { 00:09:36.626 "name": "BaseBdev3", 00:09:36.626 "uuid": "88a688a7-3e52-4881-a630-22cdc37ecb9e", 00:09:36.626 "is_configured": true, 00:09:36.626 "data_offset": 0, 00:09:36.626 "data_size": 65536 00:09:36.626 }, 00:09:36.626 { 00:09:36.626 "name": "BaseBdev4", 00:09:36.626 "uuid": "7b62ae8e-71fa-4abf-8bbc-4d7c378b6198", 00:09:36.626 "is_configured": true, 00:09:36.626 "data_offset": 0, 00:09:36.626 "data_size": 65536 00:09:36.626 } 00:09:36.626 ] 00:09:36.626 }' 00:09:36.626 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.626 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.884 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.884 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.884 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.884 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.884 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.884 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
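The trace just above dumps the raid bdev info after `BaseBdev1` is deleted: state `offline`, 3 of 4 base bdevs discovered, and a null entry where `BaseBdev1` used to be. The checks that `verify_raid_bdev_state` performs with `jq` filters can be sketched in plain Python against that same JSON (a minimal illustration using values copied from the trace, trimmed to the inspected fields; this is not part of the SPDK test suite):

```python
import json

# Raid bdev info as dumped by `rpc_cmd bdev_raid_get_bdevs all` in the
# trace above, trimmed to the fields the jq filters inspect.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "raid0",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# Equivalent of the jq filter
#   '.base_bdevs_list[] | select(.is_configured == true).name'
configured = [b["name"] for b in raid_bdev_info["base_bdevs_list"]
              if b["is_configured"]]

# The state assertions made after BaseBdev1 is removed: raid0 has no
# redundancy, so losing one base bdev takes the array offline.
assert raid_bdev_info["state"] == "offline"
assert raid_bdev_info["num_base_bdevs_discovered"] == 3
assert configured == ["BaseBdev2", "BaseBdev3", "BaseBdev4"]
print(configured)  # → ['BaseBdev2', 'BaseBdev3', 'BaseBdev4']
```

This is why `has_redundancy raid0` returning 1 in the trace flips `expected_state` to `offline`: raid0 stripes without parity, so a single missing base bdev is unrecoverable.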
00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.142 [2024-11-26 13:22:25.484797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.142 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.143 [2024-11-26 13:22:25.609026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.143 13:22:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.143 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.401 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.402 [2024-11-26 13:22:25.733608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:37.402 [2024-11-26 13:22:25.733655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.402 BaseBdev2 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.402 [ 00:09:37.402 { 00:09:37.402 "name": "BaseBdev2", 00:09:37.402 "aliases": [ 00:09:37.402 "95a89d98-211e-49af-99b7-cef3b1eb265e" 00:09:37.402 ], 00:09:37.402 "product_name": "Malloc disk", 00:09:37.402 "block_size": 512, 00:09:37.402 "num_blocks": 65536, 00:09:37.402 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:37.402 "assigned_rate_limits": { 00:09:37.402 "rw_ios_per_sec": 0, 00:09:37.402 "rw_mbytes_per_sec": 0, 00:09:37.402 "r_mbytes_per_sec": 0, 00:09:37.402 "w_mbytes_per_sec": 0 00:09:37.402 }, 00:09:37.402 "claimed": false, 00:09:37.402 "zoned": false, 00:09:37.402 "supported_io_types": { 00:09:37.402 "read": true, 00:09:37.402 "write": true, 00:09:37.402 "unmap": true, 00:09:37.402 "flush": true, 00:09:37.402 "reset": true, 00:09:37.402 "nvme_admin": false, 00:09:37.402 "nvme_io": false, 00:09:37.402 "nvme_io_md": false, 00:09:37.402 "write_zeroes": true, 00:09:37.402 "zcopy": true, 00:09:37.402 "get_zone_info": false, 00:09:37.402 "zone_management": false, 00:09:37.402 "zone_append": false, 00:09:37.402 "compare": false, 00:09:37.402 "compare_and_write": false, 00:09:37.402 "abort": true, 00:09:37.402 "seek_hole": false, 00:09:37.402 
"seek_data": false, 00:09:37.402 "copy": true, 00:09:37.402 "nvme_iov_md": false 00:09:37.402 }, 00:09:37.402 "memory_domains": [ 00:09:37.402 { 00:09:37.402 "dma_device_id": "system", 00:09:37.402 "dma_device_type": 1 00:09:37.402 }, 00:09:37.402 { 00:09:37.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.402 "dma_device_type": 2 00:09:37.402 } 00:09:37.402 ], 00:09:37.402 "driver_specific": {} 00:09:37.402 } 00:09:37.402 ] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.402 BaseBdev3 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.402 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.662 [ 00:09:37.662 { 00:09:37.662 "name": "BaseBdev3", 00:09:37.662 "aliases": [ 00:09:37.662 "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc" 00:09:37.662 ], 00:09:37.662 "product_name": "Malloc disk", 00:09:37.662 "block_size": 512, 00:09:37.662 "num_blocks": 65536, 00:09:37.662 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:37.662 "assigned_rate_limits": { 00:09:37.662 "rw_ios_per_sec": 0, 00:09:37.662 "rw_mbytes_per_sec": 0, 00:09:37.662 "r_mbytes_per_sec": 0, 00:09:37.662 "w_mbytes_per_sec": 0 00:09:37.662 }, 00:09:37.662 "claimed": false, 00:09:37.662 "zoned": false, 00:09:37.662 "supported_io_types": { 00:09:37.662 "read": true, 00:09:37.662 "write": true, 00:09:37.662 "unmap": true, 00:09:37.662 "flush": true, 00:09:37.662 "reset": true, 00:09:37.662 "nvme_admin": false, 00:09:37.662 "nvme_io": false, 00:09:37.662 "nvme_io_md": false, 00:09:37.662 "write_zeroes": true, 00:09:37.662 "zcopy": true, 00:09:37.662 "get_zone_info": false, 00:09:37.662 "zone_management": false, 00:09:37.662 "zone_append": false, 00:09:37.662 "compare": false, 00:09:37.662 "compare_and_write": false, 00:09:37.662 "abort": true, 00:09:37.662 "seek_hole": false, 00:09:37.662 "seek_data": false, 
00:09:37.662 "copy": true, 00:09:37.662 "nvme_iov_md": false 00:09:37.662 }, 00:09:37.662 "memory_domains": [ 00:09:37.662 { 00:09:37.662 "dma_device_id": "system", 00:09:37.662 "dma_device_type": 1 00:09:37.662 }, 00:09:37.662 { 00:09:37.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.662 "dma_device_type": 2 00:09:37.662 } 00:09:37.662 ], 00:09:37.662 "driver_specific": {} 00:09:37.662 } 00:09:37.662 ] 00:09:37.662 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.662 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.662 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.662 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.662 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:37.662 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.662 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.662 BaseBdev4 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.662 
13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.662 [ 00:09:37.662 { 00:09:37.662 "name": "BaseBdev4", 00:09:37.662 "aliases": [ 00:09:37.662 "dbb39612-172f-4ee9-b426-f68f11918a96" 00:09:37.662 ], 00:09:37.662 "product_name": "Malloc disk", 00:09:37.662 "block_size": 512, 00:09:37.662 "num_blocks": 65536, 00:09:37.662 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:37.662 "assigned_rate_limits": { 00:09:37.662 "rw_ios_per_sec": 0, 00:09:37.662 "rw_mbytes_per_sec": 0, 00:09:37.662 "r_mbytes_per_sec": 0, 00:09:37.662 "w_mbytes_per_sec": 0 00:09:37.662 }, 00:09:37.662 "claimed": false, 00:09:37.662 "zoned": false, 00:09:37.662 "supported_io_types": { 00:09:37.662 "read": true, 00:09:37.662 "write": true, 00:09:37.662 "unmap": true, 00:09:37.662 "flush": true, 00:09:37.662 "reset": true, 00:09:37.662 "nvme_admin": false, 00:09:37.662 "nvme_io": false, 00:09:37.662 "nvme_io_md": false, 00:09:37.662 "write_zeroes": true, 00:09:37.662 "zcopy": true, 00:09:37.662 "get_zone_info": false, 00:09:37.662 "zone_management": false, 00:09:37.662 "zone_append": false, 00:09:37.662 "compare": false, 00:09:37.662 "compare_and_write": false, 00:09:37.662 "abort": true, 00:09:37.662 "seek_hole": false, 00:09:37.662 "seek_data": false, 00:09:37.662 
"copy": true, 00:09:37.662 "nvme_iov_md": false 00:09:37.662 }, 00:09:37.662 "memory_domains": [ 00:09:37.662 { 00:09:37.662 "dma_device_id": "system", 00:09:37.662 "dma_device_type": 1 00:09:37.662 }, 00:09:37.662 { 00:09:37.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.662 "dma_device_type": 2 00:09:37.662 } 00:09:37.662 ], 00:09:37.662 "driver_specific": {} 00:09:37.662 } 00:09:37.662 ] 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.662 [2024-11-26 13:22:26.058243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.662 [2024-11-26 13:22:26.058435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.662 [2024-11-26 13:22:26.058604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.662 [2024-11-26 13:22:26.060725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.662 [2024-11-26 13:22:26.060790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.662 13:22:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.662 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.663 "name": "Existed_Raid", 00:09:37.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.663 "strip_size_kb": 64, 00:09:37.663 "state": "configuring", 00:09:37.663 
"raid_level": "raid0", 00:09:37.663 "superblock": false, 00:09:37.663 "num_base_bdevs": 4, 00:09:37.663 "num_base_bdevs_discovered": 3, 00:09:37.663 "num_base_bdevs_operational": 4, 00:09:37.663 "base_bdevs_list": [ 00:09:37.663 { 00:09:37.663 "name": "BaseBdev1", 00:09:37.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.663 "is_configured": false, 00:09:37.663 "data_offset": 0, 00:09:37.663 "data_size": 0 00:09:37.663 }, 00:09:37.663 { 00:09:37.663 "name": "BaseBdev2", 00:09:37.663 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:37.663 "is_configured": true, 00:09:37.663 "data_offset": 0, 00:09:37.663 "data_size": 65536 00:09:37.663 }, 00:09:37.663 { 00:09:37.663 "name": "BaseBdev3", 00:09:37.663 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:37.663 "is_configured": true, 00:09:37.663 "data_offset": 0, 00:09:37.663 "data_size": 65536 00:09:37.663 }, 00:09:37.663 { 00:09:37.663 "name": "BaseBdev4", 00:09:37.663 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:37.663 "is_configured": true, 00:09:37.663 "data_offset": 0, 00:09:37.663 "data_size": 65536 00:09:37.663 } 00:09:37.663 ] 00:09:37.663 }' 00:09:37.663 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.663 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.231 [2024-11-26 13:22:26.578331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.231 "name": "Existed_Raid", 00:09:38.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.231 "strip_size_kb": 64, 00:09:38.231 "state": "configuring", 00:09:38.231 "raid_level": "raid0", 00:09:38.231 "superblock": false, 00:09:38.231 
"num_base_bdevs": 4, 00:09:38.231 "num_base_bdevs_discovered": 2, 00:09:38.231 "num_base_bdevs_operational": 4, 00:09:38.231 "base_bdevs_list": [ 00:09:38.231 { 00:09:38.231 "name": "BaseBdev1", 00:09:38.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.231 "is_configured": false, 00:09:38.231 "data_offset": 0, 00:09:38.231 "data_size": 0 00:09:38.231 }, 00:09:38.231 { 00:09:38.231 "name": null, 00:09:38.231 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:38.231 "is_configured": false, 00:09:38.231 "data_offset": 0, 00:09:38.231 "data_size": 65536 00:09:38.231 }, 00:09:38.231 { 00:09:38.231 "name": "BaseBdev3", 00:09:38.231 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:38.231 "is_configured": true, 00:09:38.231 "data_offset": 0, 00:09:38.231 "data_size": 65536 00:09:38.231 }, 00:09:38.231 { 00:09:38.231 "name": "BaseBdev4", 00:09:38.231 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:38.231 "is_configured": true, 00:09:38.231 "data_offset": 0, 00:09:38.231 "data_size": 65536 00:09:38.231 } 00:09:38.231 ] 00:09:38.231 }' 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.231 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:38.799 13:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 [2024-11-26 13:22:27.202719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.799 BaseBdev1 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.799 [ 00:09:38.799 { 00:09:38.799 "name": "BaseBdev1", 00:09:38.799 "aliases": [ 00:09:38.799 "57772e2b-05fe-49d4-bb1f-cc13eec83ebb" 00:09:38.799 ], 00:09:38.799 "product_name": "Malloc disk", 00:09:38.799 "block_size": 512, 00:09:38.799 "num_blocks": 65536, 00:09:38.799 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:38.799 "assigned_rate_limits": { 00:09:38.799 "rw_ios_per_sec": 0, 00:09:38.799 "rw_mbytes_per_sec": 0, 00:09:38.799 "r_mbytes_per_sec": 0, 00:09:38.799 "w_mbytes_per_sec": 0 00:09:38.799 }, 00:09:38.799 "claimed": true, 00:09:38.799 "claim_type": "exclusive_write", 00:09:38.799 "zoned": false, 00:09:38.799 "supported_io_types": { 00:09:38.799 "read": true, 00:09:38.799 "write": true, 00:09:38.799 "unmap": true, 00:09:38.799 "flush": true, 00:09:38.799 "reset": true, 00:09:38.799 "nvme_admin": false, 00:09:38.799 "nvme_io": false, 00:09:38.799 "nvme_io_md": false, 00:09:38.799 "write_zeroes": true, 00:09:38.799 "zcopy": true, 00:09:38.799 "get_zone_info": false, 00:09:38.799 "zone_management": false, 00:09:38.799 "zone_append": false, 00:09:38.799 "compare": false, 00:09:38.799 "compare_and_write": false, 00:09:38.799 "abort": true, 00:09:38.799 "seek_hole": false, 00:09:38.799 "seek_data": false, 00:09:38.799 "copy": true, 00:09:38.799 "nvme_iov_md": false 00:09:38.799 }, 00:09:38.799 "memory_domains": [ 00:09:38.799 { 00:09:38.799 "dma_device_id": "system", 00:09:38.799 "dma_device_type": 1 00:09:38.799 }, 00:09:38.799 { 00:09:38.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.799 "dma_device_type": 2 00:09:38.799 } 00:09:38.799 ], 00:09:38.799 "driver_specific": {} 00:09:38.799 } 00:09:38.799 ] 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.799 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.799 "name": "Existed_Raid", 00:09:38.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.799 "strip_size_kb": 64, 00:09:38.799 "state": "configuring", 00:09:38.799 "raid_level": "raid0", 00:09:38.799 "superblock": false, 
00:09:38.799 "num_base_bdevs": 4, 00:09:38.799 "num_base_bdevs_discovered": 3, 00:09:38.799 "num_base_bdevs_operational": 4, 00:09:38.799 "base_bdevs_list": [ 00:09:38.799 { 00:09:38.799 "name": "BaseBdev1", 00:09:38.799 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:38.799 "is_configured": true, 00:09:38.799 "data_offset": 0, 00:09:38.799 "data_size": 65536 00:09:38.800 }, 00:09:38.800 { 00:09:38.800 "name": null, 00:09:38.800 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:38.800 "is_configured": false, 00:09:38.800 "data_offset": 0, 00:09:38.800 "data_size": 65536 00:09:38.800 }, 00:09:38.800 { 00:09:38.800 "name": "BaseBdev3", 00:09:38.800 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:38.800 "is_configured": true, 00:09:38.800 "data_offset": 0, 00:09:38.800 "data_size": 65536 00:09:38.800 }, 00:09:38.800 { 00:09:38.800 "name": "BaseBdev4", 00:09:38.800 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:38.800 "is_configured": true, 00:09:38.800 "data_offset": 0, 00:09:38.800 "data_size": 65536 00:09:38.800 } 00:09:38.800 ] 00:09:38.800 }' 00:09:38.800 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.800 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:39.367 13:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 [2024-11-26 13:22:27.806896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.367 13:22:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.367 "name": "Existed_Raid", 00:09:39.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.367 "strip_size_kb": 64, 00:09:39.367 "state": "configuring", 00:09:39.367 "raid_level": "raid0", 00:09:39.367 "superblock": false, 00:09:39.367 "num_base_bdevs": 4, 00:09:39.367 "num_base_bdevs_discovered": 2, 00:09:39.367 "num_base_bdevs_operational": 4, 00:09:39.367 "base_bdevs_list": [ 00:09:39.367 { 00:09:39.367 "name": "BaseBdev1", 00:09:39.367 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:39.367 "is_configured": true, 00:09:39.367 "data_offset": 0, 00:09:39.367 "data_size": 65536 00:09:39.367 }, 00:09:39.367 { 00:09:39.367 "name": null, 00:09:39.367 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:39.367 "is_configured": false, 00:09:39.367 "data_offset": 0, 00:09:39.367 "data_size": 65536 00:09:39.367 }, 00:09:39.367 { 00:09:39.367 "name": null, 00:09:39.367 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:39.367 "is_configured": false, 00:09:39.367 "data_offset": 0, 00:09:39.367 "data_size": 65536 00:09:39.367 }, 00:09:39.367 { 00:09:39.367 "name": "BaseBdev4", 00:09:39.367 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:39.367 "is_configured": true, 00:09:39.367 "data_offset": 0, 00:09:39.367 "data_size": 65536 00:09:39.367 } 00:09:39.367 ] 00:09:39.367 }' 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.367 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.935 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.936 [2024-11-26 13:22:28.391023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.936 "name": "Existed_Raid", 00:09:39.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.936 "strip_size_kb": 64, 00:09:39.936 "state": "configuring", 00:09:39.936 "raid_level": "raid0", 00:09:39.936 "superblock": false, 00:09:39.936 "num_base_bdevs": 4, 00:09:39.936 "num_base_bdevs_discovered": 3, 00:09:39.936 "num_base_bdevs_operational": 4, 00:09:39.936 "base_bdevs_list": [ 00:09:39.936 { 00:09:39.936 "name": "BaseBdev1", 00:09:39.936 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:39.936 "is_configured": true, 00:09:39.936 "data_offset": 0, 00:09:39.936 "data_size": 65536 00:09:39.936 }, 00:09:39.936 { 00:09:39.936 "name": null, 00:09:39.936 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:39.936 "is_configured": false, 00:09:39.936 "data_offset": 0, 00:09:39.936 "data_size": 65536 00:09:39.936 }, 00:09:39.936 { 00:09:39.936 "name": "BaseBdev3", 00:09:39.936 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 
00:09:39.936 "is_configured": true, 00:09:39.936 "data_offset": 0, 00:09:39.936 "data_size": 65536 00:09:39.936 }, 00:09:39.936 { 00:09:39.936 "name": "BaseBdev4", 00:09:39.936 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:39.936 "is_configured": true, 00:09:39.936 "data_offset": 0, 00:09:39.936 "data_size": 65536 00:09:39.936 } 00:09:39.936 ] 00:09:39.936 }' 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.936 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.504 13:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 [2024-11-26 13:22:28.975171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.504 13:22:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.768 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.768 "name": "Existed_Raid", 00:09:40.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.768 "strip_size_kb": 64, 00:09:40.768 "state": "configuring", 00:09:40.768 "raid_level": "raid0", 00:09:40.768 "superblock": false, 00:09:40.768 "num_base_bdevs": 4, 00:09:40.768 "num_base_bdevs_discovered": 2, 00:09:40.768 
"num_base_bdevs_operational": 4, 00:09:40.768 "base_bdevs_list": [ 00:09:40.768 { 00:09:40.768 "name": null, 00:09:40.768 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:40.768 "is_configured": false, 00:09:40.768 "data_offset": 0, 00:09:40.768 "data_size": 65536 00:09:40.768 }, 00:09:40.768 { 00:09:40.768 "name": null, 00:09:40.768 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:40.768 "is_configured": false, 00:09:40.768 "data_offset": 0, 00:09:40.768 "data_size": 65536 00:09:40.768 }, 00:09:40.768 { 00:09:40.768 "name": "BaseBdev3", 00:09:40.769 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:40.769 "is_configured": true, 00:09:40.769 "data_offset": 0, 00:09:40.769 "data_size": 65536 00:09:40.769 }, 00:09:40.769 { 00:09:40.769 "name": "BaseBdev4", 00:09:40.769 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:40.769 "is_configured": true, 00:09:40.769 "data_offset": 0, 00:09:40.769 "data_size": 65536 00:09:40.769 } 00:09:40.769 ] 00:09:40.769 }' 00:09:40.769 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.769 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.030 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.030 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.030 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.030 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.030 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.289 [2024-11-26 13:22:29.617269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.289 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.289 
13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.290 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.290 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.290 "name": "Existed_Raid", 00:09:41.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.290 "strip_size_kb": 64, 00:09:41.290 "state": "configuring", 00:09:41.290 "raid_level": "raid0", 00:09:41.290 "superblock": false, 00:09:41.290 "num_base_bdevs": 4, 00:09:41.290 "num_base_bdevs_discovered": 3, 00:09:41.290 "num_base_bdevs_operational": 4, 00:09:41.290 "base_bdevs_list": [ 00:09:41.290 { 00:09:41.290 "name": null, 00:09:41.290 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:41.290 "is_configured": false, 00:09:41.290 "data_offset": 0, 00:09:41.290 "data_size": 65536 00:09:41.290 }, 00:09:41.290 { 00:09:41.290 "name": "BaseBdev2", 00:09:41.290 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:41.290 "is_configured": true, 00:09:41.290 "data_offset": 0, 00:09:41.290 "data_size": 65536 00:09:41.290 }, 00:09:41.290 { 00:09:41.290 "name": "BaseBdev3", 00:09:41.290 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:41.290 "is_configured": true, 00:09:41.290 "data_offset": 0, 00:09:41.290 "data_size": 65536 00:09:41.290 }, 00:09:41.290 { 00:09:41.290 "name": "BaseBdev4", 00:09:41.290 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:41.290 "is_configured": true, 00:09:41.290 "data_offset": 0, 00:09:41.290 "data_size": 65536 00:09:41.290 } 00:09:41.290 ] 00:09:41.290 }' 00:09:41.290 13:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.290 13:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.858 13:22:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:41.858 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 57772e2b-05fe-49d4-bb1f-cc13eec83ebb 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.859 [2024-11-26 13:22:30.290186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:41.859 [2024-11-26 13:22:30.290280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:41.859 [2024-11-26 13:22:30.290303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:41.859 [2024-11-26 13:22:30.290604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:09:41.859 [2024-11-26 13:22:30.290782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:41.859 [2024-11-26 13:22:30.290802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:41.859 [2024-11-26 13:22:30.291034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.859 NewBaseBdev 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:41.859 [ 00:09:41.859 { 00:09:41.859 "name": "NewBaseBdev", 00:09:41.859 "aliases": [ 00:09:41.859 "57772e2b-05fe-49d4-bb1f-cc13eec83ebb" 00:09:41.859 ], 00:09:41.859 "product_name": "Malloc disk", 00:09:41.859 "block_size": 512, 00:09:41.859 "num_blocks": 65536, 00:09:41.859 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:41.859 "assigned_rate_limits": { 00:09:41.859 "rw_ios_per_sec": 0, 00:09:41.859 "rw_mbytes_per_sec": 0, 00:09:41.859 "r_mbytes_per_sec": 0, 00:09:41.859 "w_mbytes_per_sec": 0 00:09:41.859 }, 00:09:41.859 "claimed": true, 00:09:41.859 "claim_type": "exclusive_write", 00:09:41.859 "zoned": false, 00:09:41.859 "supported_io_types": { 00:09:41.859 "read": true, 00:09:41.859 "write": true, 00:09:41.859 "unmap": true, 00:09:41.859 "flush": true, 00:09:41.859 "reset": true, 00:09:41.859 "nvme_admin": false, 00:09:41.859 "nvme_io": false, 00:09:41.859 "nvme_io_md": false, 00:09:41.859 "write_zeroes": true, 00:09:41.859 "zcopy": true, 00:09:41.859 "get_zone_info": false, 00:09:41.859 "zone_management": false, 00:09:41.859 "zone_append": false, 00:09:41.859 "compare": false, 00:09:41.859 "compare_and_write": false, 00:09:41.859 "abort": true, 00:09:41.859 "seek_hole": false, 00:09:41.859 "seek_data": false, 00:09:41.859 "copy": true, 00:09:41.859 "nvme_iov_md": false 00:09:41.859 }, 00:09:41.859 "memory_domains": [ 00:09:41.859 { 00:09:41.859 "dma_device_id": "system", 00:09:41.859 "dma_device_type": 1 00:09:41.859 }, 00:09:41.859 { 00:09:41.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.859 "dma_device_type": 2 00:09:41.859 } 00:09:41.859 ], 00:09:41.859 "driver_specific": {} 00:09:41.859 } 00:09:41.859 ] 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.859 "name": "Existed_Raid", 00:09:41.859 "uuid": "e65397de-e34b-44e3-aa1a-0b29e052f1b2", 00:09:41.859 "strip_size_kb": 64, 00:09:41.859 "state": "online", 00:09:41.859 "raid_level": "raid0", 00:09:41.859 "superblock": false, 00:09:41.859 "num_base_bdevs": 4, 00:09:41.859 
"num_base_bdevs_discovered": 4, 00:09:41.859 "num_base_bdevs_operational": 4, 00:09:41.859 "base_bdevs_list": [ 00:09:41.859 { 00:09:41.859 "name": "NewBaseBdev", 00:09:41.859 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:41.859 "is_configured": true, 00:09:41.859 "data_offset": 0, 00:09:41.859 "data_size": 65536 00:09:41.859 }, 00:09:41.859 { 00:09:41.859 "name": "BaseBdev2", 00:09:41.859 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:41.859 "is_configured": true, 00:09:41.859 "data_offset": 0, 00:09:41.859 "data_size": 65536 00:09:41.859 }, 00:09:41.859 { 00:09:41.859 "name": "BaseBdev3", 00:09:41.859 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:41.859 "is_configured": true, 00:09:41.859 "data_offset": 0, 00:09:41.859 "data_size": 65536 00:09:41.859 }, 00:09:41.859 { 00:09:41.859 "name": "BaseBdev4", 00:09:41.859 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:41.859 "is_configured": true, 00:09:41.859 "data_offset": 0, 00:09:41.859 "data_size": 65536 00:09:41.859 } 00:09:41.859 ] 00:09:41.859 }' 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.859 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.427 [2024-11-26 13:22:30.882876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.427 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.427 "name": "Existed_Raid", 00:09:42.427 "aliases": [ 00:09:42.427 "e65397de-e34b-44e3-aa1a-0b29e052f1b2" 00:09:42.427 ], 00:09:42.427 "product_name": "Raid Volume", 00:09:42.427 "block_size": 512, 00:09:42.427 "num_blocks": 262144, 00:09:42.427 "uuid": "e65397de-e34b-44e3-aa1a-0b29e052f1b2", 00:09:42.427 "assigned_rate_limits": { 00:09:42.427 "rw_ios_per_sec": 0, 00:09:42.427 "rw_mbytes_per_sec": 0, 00:09:42.427 "r_mbytes_per_sec": 0, 00:09:42.427 "w_mbytes_per_sec": 0 00:09:42.427 }, 00:09:42.427 "claimed": false, 00:09:42.427 "zoned": false, 00:09:42.427 "supported_io_types": { 00:09:42.427 "read": true, 00:09:42.427 "write": true, 00:09:42.427 "unmap": true, 00:09:42.427 "flush": true, 00:09:42.427 "reset": true, 00:09:42.427 "nvme_admin": false, 00:09:42.427 "nvme_io": false, 00:09:42.427 "nvme_io_md": false, 00:09:42.427 "write_zeroes": true, 00:09:42.427 "zcopy": false, 00:09:42.427 "get_zone_info": false, 00:09:42.427 "zone_management": false, 00:09:42.427 "zone_append": false, 00:09:42.427 "compare": false, 00:09:42.427 "compare_and_write": false, 00:09:42.427 "abort": false, 00:09:42.427 "seek_hole": false, 00:09:42.427 "seek_data": false, 00:09:42.427 "copy": false, 00:09:42.427 "nvme_iov_md": false 00:09:42.427 }, 00:09:42.427 "memory_domains": [ 
00:09:42.427 { 00:09:42.427 "dma_device_id": "system", 00:09:42.427 "dma_device_type": 1 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.427 "dma_device_type": 2 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "dma_device_id": "system", 00:09:42.427 "dma_device_type": 1 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.427 "dma_device_type": 2 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "dma_device_id": "system", 00:09:42.427 "dma_device_type": 1 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.427 "dma_device_type": 2 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "dma_device_id": "system", 00:09:42.427 "dma_device_type": 1 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.427 "dma_device_type": 2 00:09:42.427 } 00:09:42.427 ], 00:09:42.427 "driver_specific": { 00:09:42.427 "raid": { 00:09:42.427 "uuid": "e65397de-e34b-44e3-aa1a-0b29e052f1b2", 00:09:42.427 "strip_size_kb": 64, 00:09:42.427 "state": "online", 00:09:42.427 "raid_level": "raid0", 00:09:42.427 "superblock": false, 00:09:42.427 "num_base_bdevs": 4, 00:09:42.427 "num_base_bdevs_discovered": 4, 00:09:42.427 "num_base_bdevs_operational": 4, 00:09:42.427 "base_bdevs_list": [ 00:09:42.427 { 00:09:42.427 "name": "NewBaseBdev", 00:09:42.427 "uuid": "57772e2b-05fe-49d4-bb1f-cc13eec83ebb", 00:09:42.427 "is_configured": true, 00:09:42.427 "data_offset": 0, 00:09:42.427 "data_size": 65536 00:09:42.427 }, 00:09:42.427 { 00:09:42.427 "name": "BaseBdev2", 00:09:42.427 "uuid": "95a89d98-211e-49af-99b7-cef3b1eb265e", 00:09:42.428 "is_configured": true, 00:09:42.428 "data_offset": 0, 00:09:42.428 "data_size": 65536 00:09:42.428 }, 00:09:42.428 { 00:09:42.428 "name": "BaseBdev3", 00:09:42.428 "uuid": "3f1722e4-1f2d-4429-bb3b-11dbda84bcfc", 00:09:42.428 "is_configured": true, 00:09:42.428 "data_offset": 0, 00:09:42.428 "data_size": 65536 
00:09:42.428 }, 00:09:42.428 { 00:09:42.428 "name": "BaseBdev4", 00:09:42.428 "uuid": "dbb39612-172f-4ee9-b426-f68f11918a96", 00:09:42.428 "is_configured": true, 00:09:42.428 "data_offset": 0, 00:09:42.428 "data_size": 65536 00:09:42.428 } 00:09:42.428 ] 00:09:42.428 } 00:09:42.428 } 00:09:42.428 }' 00:09:42.428 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.428 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:42.428 BaseBdev2 00:09:42.428 BaseBdev3 00:09:42.428 BaseBdev4' 00:09:42.428 13:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.687 
13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.946 [2024-11-26 13:22:31.254500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.946 [2024-11-26 13:22:31.254670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.946 [2024-11-26 13:22:31.254765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.946 [2024-11-26 13:22:31.254832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.946 [2024-11-26 13:22:31.254846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 68912 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 68912 ']' 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 68912 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68912 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68912' 00:09:42.946 killing process with pid 68912 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 68912 00:09:42.946 [2024-11-26 13:22:31.295873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.946 13:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 68912 00:09:43.205 [2024-11-26 13:22:31.561546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.142 00:09:44.142 real 0m12.476s 00:09:44.142 user 0m21.051s 00:09:44.142 sys 0m1.795s 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.142 ************************************ 00:09:44.142 END TEST raid_state_function_test 00:09:44.142 ************************************ 00:09:44.142 13:22:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:44.142 13:22:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.142 13:22:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.142 13:22:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.142 ************************************ 00:09:44.142 START TEST raid_state_function_test_sb 00:09:44.142 ************************************ 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.142 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:44.142 
13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69591 00:09:44.143 Process raid pid: 69591 00:09:44.143 13:22:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69591' 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69591 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69591 ']' 00:09:44.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.143 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.143 [2024-11-26 13:22:32.559017] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:09:44.143 [2024-11-26 13:22:32.559150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.402 [2024-11-26 13:22:32.723151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.402 [2024-11-26 13:22:32.827370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.661 [2024-11-26 13:22:33.004774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.661 [2024-11-26 13:22:33.004805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.921 [2024-11-26 13:22:33.470699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.921 [2024-11-26 13:22:33.470754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.921 [2024-11-26 13:22:33.470768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.921 [2024-11-26 13:22:33.470782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.921 [2024-11-26 13:22:33.470790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:44.921 [2024-11-26 13:22:33.470801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.921 [2024-11-26 13:22:33.470809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:44.921 [2024-11-26 13:22:33.470820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.921 13:22:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.921 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.180 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.180 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.180 "name": "Existed_Raid", 00:09:45.180 "uuid": "a98b385f-ae3c-4daa-9ea3-d3d1ccb3a233", 00:09:45.180 "strip_size_kb": 64, 00:09:45.180 "state": "configuring", 00:09:45.180 "raid_level": "raid0", 00:09:45.180 "superblock": true, 00:09:45.180 "num_base_bdevs": 4, 00:09:45.180 "num_base_bdevs_discovered": 0, 00:09:45.180 "num_base_bdevs_operational": 4, 00:09:45.180 "base_bdevs_list": [ 00:09:45.180 { 00:09:45.180 "name": "BaseBdev1", 00:09:45.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.180 "is_configured": false, 00:09:45.180 "data_offset": 0, 00:09:45.180 "data_size": 0 00:09:45.180 }, 00:09:45.180 { 00:09:45.180 "name": "BaseBdev2", 00:09:45.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.180 "is_configured": false, 00:09:45.180 "data_offset": 0, 00:09:45.180 "data_size": 0 00:09:45.180 }, 00:09:45.180 { 00:09:45.180 "name": "BaseBdev3", 00:09:45.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.180 "is_configured": false, 00:09:45.180 "data_offset": 0, 00:09:45.180 "data_size": 0 00:09:45.180 }, 00:09:45.180 { 00:09:45.180 "name": "BaseBdev4", 00:09:45.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.180 "is_configured": false, 00:09:45.180 "data_offset": 0, 00:09:45.180 "data_size": 0 00:09:45.180 } 00:09:45.180 ] 00:09:45.180 }' 00:09:45.180 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.180 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.440 13:22:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.440 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.699 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.699 [2024-11-26 13:22:34.010764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.699 [2024-11-26 13:22:34.010949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:45.699 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.699 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.699 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.700 [2024-11-26 13:22:34.022780] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.700 [2024-11-26 13:22:34.022841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.700 [2024-11-26 13:22:34.022855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.700 [2024-11-26 13:22:34.022868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.700 [2024-11-26 13:22:34.022876] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.700 [2024-11-26 13:22:34.022888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.700 [2024-11-26 13:22:34.022896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:45.700 [2024-11-26 13:22:34.022907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.700 [2024-11-26 13:22:34.061689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.700 BaseBdev1 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.700 [ 00:09:45.700 { 00:09:45.700 "name": "BaseBdev1", 00:09:45.700 "aliases": [ 00:09:45.700 "19f55d14-fd90-4640-8584-61c2811a8110" 00:09:45.700 ], 00:09:45.700 "product_name": "Malloc disk", 00:09:45.700 "block_size": 512, 00:09:45.700 "num_blocks": 65536, 00:09:45.700 "uuid": "19f55d14-fd90-4640-8584-61c2811a8110", 00:09:45.700 "assigned_rate_limits": { 00:09:45.700 "rw_ios_per_sec": 0, 00:09:45.700 "rw_mbytes_per_sec": 0, 00:09:45.700 "r_mbytes_per_sec": 0, 00:09:45.700 "w_mbytes_per_sec": 0 00:09:45.700 }, 00:09:45.700 "claimed": true, 00:09:45.700 "claim_type": "exclusive_write", 00:09:45.700 "zoned": false, 00:09:45.700 "supported_io_types": { 00:09:45.700 "read": true, 00:09:45.700 "write": true, 00:09:45.700 "unmap": true, 00:09:45.700 "flush": true, 00:09:45.700 "reset": true, 00:09:45.700 "nvme_admin": false, 00:09:45.700 "nvme_io": false, 00:09:45.700 "nvme_io_md": false, 00:09:45.700 "write_zeroes": true, 00:09:45.700 "zcopy": true, 00:09:45.700 "get_zone_info": false, 00:09:45.700 "zone_management": false, 00:09:45.700 "zone_append": false, 00:09:45.700 "compare": false, 00:09:45.700 "compare_and_write": false, 00:09:45.700 "abort": true, 00:09:45.700 "seek_hole": false, 00:09:45.700 "seek_data": false, 00:09:45.700 "copy": true, 00:09:45.700 "nvme_iov_md": false 00:09:45.700 }, 00:09:45.700 "memory_domains": [ 00:09:45.700 { 00:09:45.700 "dma_device_id": "system", 00:09:45.700 "dma_device_type": 1 00:09:45.700 }, 00:09:45.700 { 00:09:45.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.700 "dma_device_type": 2 00:09:45.700 } 00:09:45.700 ], 00:09:45.700 "driver_specific": {} 
00:09:45.700 } 00:09:45.700 ] 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.700 "name": "Existed_Raid", 00:09:45.700 "uuid": "7942643a-1201-4931-a1ab-178a6b9e48a4", 00:09:45.700 "strip_size_kb": 64, 00:09:45.700 "state": "configuring", 00:09:45.700 "raid_level": "raid0", 00:09:45.700 "superblock": true, 00:09:45.700 "num_base_bdevs": 4, 00:09:45.700 "num_base_bdevs_discovered": 1, 00:09:45.700 "num_base_bdevs_operational": 4, 00:09:45.700 "base_bdevs_list": [ 00:09:45.700 { 00:09:45.700 "name": "BaseBdev1", 00:09:45.700 "uuid": "19f55d14-fd90-4640-8584-61c2811a8110", 00:09:45.700 "is_configured": true, 00:09:45.700 "data_offset": 2048, 00:09:45.700 "data_size": 63488 00:09:45.700 }, 00:09:45.700 { 00:09:45.700 "name": "BaseBdev2", 00:09:45.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.700 "is_configured": false, 00:09:45.700 "data_offset": 0, 00:09:45.700 "data_size": 0 00:09:45.700 }, 00:09:45.700 { 00:09:45.700 "name": "BaseBdev3", 00:09:45.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.700 "is_configured": false, 00:09:45.700 "data_offset": 0, 00:09:45.700 "data_size": 0 00:09:45.700 }, 00:09:45.700 { 00:09:45.700 "name": "BaseBdev4", 00:09:45.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.700 "is_configured": false, 00:09:45.700 "data_offset": 0, 00:09:45.700 "data_size": 0 00:09:45.700 } 00:09:45.700 ] 00:09:45.700 }' 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.700 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.268 [2024-11-26 13:22:34.617836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.268 [2024-11-26 13:22:34.617874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.268 [2024-11-26 13:22:34.629908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.268 [2024-11-26 13:22:34.632140] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.268 [2024-11-26 13:22:34.632360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.268 [2024-11-26 13:22:34.632479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.268 [2024-11-26 13:22:34.632538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.268 [2024-11-26 13:22:34.632758] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.268 [2024-11-26 13:22:34.632821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:46.268 13:22:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.268 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.268 "name": 
"Existed_Raid", 00:09:46.268 "uuid": "3d4d849f-5be0-4d5d-be4c-9f46d61587da", 00:09:46.268 "strip_size_kb": 64, 00:09:46.268 "state": "configuring", 00:09:46.268 "raid_level": "raid0", 00:09:46.268 "superblock": true, 00:09:46.268 "num_base_bdevs": 4, 00:09:46.268 "num_base_bdevs_discovered": 1, 00:09:46.268 "num_base_bdevs_operational": 4, 00:09:46.269 "base_bdevs_list": [ 00:09:46.269 { 00:09:46.269 "name": "BaseBdev1", 00:09:46.269 "uuid": "19f55d14-fd90-4640-8584-61c2811a8110", 00:09:46.269 "is_configured": true, 00:09:46.269 "data_offset": 2048, 00:09:46.269 "data_size": 63488 00:09:46.269 }, 00:09:46.269 { 00:09:46.269 "name": "BaseBdev2", 00:09:46.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.269 "is_configured": false, 00:09:46.269 "data_offset": 0, 00:09:46.269 "data_size": 0 00:09:46.269 }, 00:09:46.269 { 00:09:46.269 "name": "BaseBdev3", 00:09:46.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.269 "is_configured": false, 00:09:46.269 "data_offset": 0, 00:09:46.269 "data_size": 0 00:09:46.269 }, 00:09:46.269 { 00:09:46.269 "name": "BaseBdev4", 00:09:46.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.269 "is_configured": false, 00:09:46.269 "data_offset": 0, 00:09:46.269 "data_size": 0 00:09:46.269 } 00:09:46.269 ] 00:09:46.269 }' 00:09:46.269 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.269 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.836 [2024-11-26 13:22:35.203204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:46.836 BaseBdev2 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.836 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.837 [ 00:09:46.837 { 00:09:46.837 "name": "BaseBdev2", 00:09:46.837 "aliases": [ 00:09:46.837 "bce638df-2c35-4708-8d2c-661a20e499d0" 00:09:46.837 ], 00:09:46.837 "product_name": "Malloc disk", 00:09:46.837 "block_size": 512, 00:09:46.837 "num_blocks": 65536, 00:09:46.837 "uuid": "bce638df-2c35-4708-8d2c-661a20e499d0", 00:09:46.837 
"assigned_rate_limits": { 00:09:46.837 "rw_ios_per_sec": 0, 00:09:46.837 "rw_mbytes_per_sec": 0, 00:09:46.837 "r_mbytes_per_sec": 0, 00:09:46.837 "w_mbytes_per_sec": 0 00:09:46.837 }, 00:09:46.837 "claimed": true, 00:09:46.837 "claim_type": "exclusive_write", 00:09:46.837 "zoned": false, 00:09:46.837 "supported_io_types": { 00:09:46.837 "read": true, 00:09:46.837 "write": true, 00:09:46.837 "unmap": true, 00:09:46.837 "flush": true, 00:09:46.837 "reset": true, 00:09:46.837 "nvme_admin": false, 00:09:46.837 "nvme_io": false, 00:09:46.837 "nvme_io_md": false, 00:09:46.837 "write_zeroes": true, 00:09:46.837 "zcopy": true, 00:09:46.837 "get_zone_info": false, 00:09:46.837 "zone_management": false, 00:09:46.837 "zone_append": false, 00:09:46.837 "compare": false, 00:09:46.837 "compare_and_write": false, 00:09:46.837 "abort": true, 00:09:46.837 "seek_hole": false, 00:09:46.837 "seek_data": false, 00:09:46.837 "copy": true, 00:09:46.837 "nvme_iov_md": false 00:09:46.837 }, 00:09:46.837 "memory_domains": [ 00:09:46.837 { 00:09:46.837 "dma_device_id": "system", 00:09:46.837 "dma_device_type": 1 00:09:46.837 }, 00:09:46.837 { 00:09:46.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.837 "dma_device_type": 2 00:09:46.837 } 00:09:46.837 ], 00:09:46.837 "driver_specific": {} 00:09:46.837 } 00:09:46.837 ] 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.837 "name": "Existed_Raid", 00:09:46.837 "uuid": "3d4d849f-5be0-4d5d-be4c-9f46d61587da", 00:09:46.837 "strip_size_kb": 64, 00:09:46.837 "state": "configuring", 00:09:46.837 "raid_level": "raid0", 00:09:46.837 "superblock": true, 00:09:46.837 "num_base_bdevs": 4, 00:09:46.837 "num_base_bdevs_discovered": 2, 00:09:46.837 "num_base_bdevs_operational": 4, 
00:09:46.837 "base_bdevs_list": [ 00:09:46.837 { 00:09:46.837 "name": "BaseBdev1", 00:09:46.837 "uuid": "19f55d14-fd90-4640-8584-61c2811a8110", 00:09:46.837 "is_configured": true, 00:09:46.837 "data_offset": 2048, 00:09:46.837 "data_size": 63488 00:09:46.837 }, 00:09:46.837 { 00:09:46.837 "name": "BaseBdev2", 00:09:46.837 "uuid": "bce638df-2c35-4708-8d2c-661a20e499d0", 00:09:46.837 "is_configured": true, 00:09:46.837 "data_offset": 2048, 00:09:46.837 "data_size": 63488 00:09:46.837 }, 00:09:46.837 { 00:09:46.837 "name": "BaseBdev3", 00:09:46.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.837 "is_configured": false, 00:09:46.837 "data_offset": 0, 00:09:46.837 "data_size": 0 00:09:46.837 }, 00:09:46.837 { 00:09:46.837 "name": "BaseBdev4", 00:09:46.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.837 "is_configured": false, 00:09:46.837 "data_offset": 0, 00:09:46.837 "data_size": 0 00:09:46.837 } 00:09:46.837 ] 00:09:46.837 }' 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.837 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.406 [2024-11-26 13:22:35.805029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.406 BaseBdev3 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.406 [ 00:09:47.406 { 00:09:47.406 "name": "BaseBdev3", 00:09:47.406 "aliases": [ 00:09:47.406 "44dbadbb-cb93-40d9-8a6a-62913b1a852d" 00:09:47.406 ], 00:09:47.406 "product_name": "Malloc disk", 00:09:47.406 "block_size": 512, 00:09:47.406 "num_blocks": 65536, 00:09:47.406 "uuid": "44dbadbb-cb93-40d9-8a6a-62913b1a852d", 00:09:47.406 "assigned_rate_limits": { 00:09:47.406 "rw_ios_per_sec": 0, 00:09:47.406 "rw_mbytes_per_sec": 0, 00:09:47.406 "r_mbytes_per_sec": 0, 00:09:47.406 "w_mbytes_per_sec": 0 00:09:47.406 }, 00:09:47.406 "claimed": true, 00:09:47.406 "claim_type": "exclusive_write", 00:09:47.406 "zoned": false, 00:09:47.406 "supported_io_types": { 00:09:47.406 "read": true, 00:09:47.406 
"write": true, 00:09:47.406 "unmap": true, 00:09:47.406 "flush": true, 00:09:47.406 "reset": true, 00:09:47.406 "nvme_admin": false, 00:09:47.406 "nvme_io": false, 00:09:47.406 "nvme_io_md": false, 00:09:47.406 "write_zeroes": true, 00:09:47.406 "zcopy": true, 00:09:47.406 "get_zone_info": false, 00:09:47.406 "zone_management": false, 00:09:47.406 "zone_append": false, 00:09:47.406 "compare": false, 00:09:47.406 "compare_and_write": false, 00:09:47.406 "abort": true, 00:09:47.406 "seek_hole": false, 00:09:47.406 "seek_data": false, 00:09:47.406 "copy": true, 00:09:47.406 "nvme_iov_md": false 00:09:47.406 }, 00:09:47.406 "memory_domains": [ 00:09:47.406 { 00:09:47.406 "dma_device_id": "system", 00:09:47.406 "dma_device_type": 1 00:09:47.406 }, 00:09:47.406 { 00:09:47.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.406 "dma_device_type": 2 00:09:47.406 } 00:09:47.406 ], 00:09:47.406 "driver_specific": {} 00:09:47.406 } 00:09:47.406 ] 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.406 "name": "Existed_Raid", 00:09:47.406 "uuid": "3d4d849f-5be0-4d5d-be4c-9f46d61587da", 00:09:47.406 "strip_size_kb": 64, 00:09:47.406 "state": "configuring", 00:09:47.406 "raid_level": "raid0", 00:09:47.406 "superblock": true, 00:09:47.406 "num_base_bdevs": 4, 00:09:47.406 "num_base_bdevs_discovered": 3, 00:09:47.406 "num_base_bdevs_operational": 4, 00:09:47.406 "base_bdevs_list": [ 00:09:47.406 { 00:09:47.406 "name": "BaseBdev1", 00:09:47.406 "uuid": "19f55d14-fd90-4640-8584-61c2811a8110", 00:09:47.406 "is_configured": true, 00:09:47.406 "data_offset": 2048, 00:09:47.406 "data_size": 63488 00:09:47.406 }, 00:09:47.406 { 00:09:47.406 "name": "BaseBdev2", 00:09:47.406 "uuid": 
"bce638df-2c35-4708-8d2c-661a20e499d0", 00:09:47.406 "is_configured": true, 00:09:47.406 "data_offset": 2048, 00:09:47.406 "data_size": 63488 00:09:47.406 }, 00:09:47.406 { 00:09:47.406 "name": "BaseBdev3", 00:09:47.406 "uuid": "44dbadbb-cb93-40d9-8a6a-62913b1a852d", 00:09:47.406 "is_configured": true, 00:09:47.406 "data_offset": 2048, 00:09:47.406 "data_size": 63488 00:09:47.406 }, 00:09:47.406 { 00:09:47.406 "name": "BaseBdev4", 00:09:47.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.406 "is_configured": false, 00:09:47.406 "data_offset": 0, 00:09:47.406 "data_size": 0 00:09:47.406 } 00:09:47.406 ] 00:09:47.406 }' 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.406 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.975 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.976 [2024-11-26 13:22:36.394153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:47.976 BaseBdev4 00:09:47.976 [2024-11-26 13:22:36.394491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:47.976 [2024-11-26 13:22:36.394510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:47.976 [2024-11-26 13:22:36.394841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:47.976 [2024-11-26 13:22:36.395028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:47.976 [2024-11-26 13:22:36.395048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:09:47.976 [2024-11-26 13:22:36.395197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.976 [ 00:09:47.976 { 00:09:47.976 "name": "BaseBdev4", 00:09:47.976 "aliases": [ 00:09:47.976 "8c44ab50-355f-4671-9002-e0bdd0095fd9" 00:09:47.976 ], 00:09:47.976 "product_name": "Malloc disk", 00:09:47.976 "block_size": 512, 00:09:47.976 
"num_blocks": 65536, 00:09:47.976 "uuid": "8c44ab50-355f-4671-9002-e0bdd0095fd9", 00:09:47.976 "assigned_rate_limits": { 00:09:47.976 "rw_ios_per_sec": 0, 00:09:47.976 "rw_mbytes_per_sec": 0, 00:09:47.976 "r_mbytes_per_sec": 0, 00:09:47.976 "w_mbytes_per_sec": 0 00:09:47.976 }, 00:09:47.976 "claimed": true, 00:09:47.976 "claim_type": "exclusive_write", 00:09:47.976 "zoned": false, 00:09:47.976 "supported_io_types": { 00:09:47.976 "read": true, 00:09:47.976 "write": true, 00:09:47.976 "unmap": true, 00:09:47.976 "flush": true, 00:09:47.976 "reset": true, 00:09:47.976 "nvme_admin": false, 00:09:47.976 "nvme_io": false, 00:09:47.976 "nvme_io_md": false, 00:09:47.976 "write_zeroes": true, 00:09:47.976 "zcopy": true, 00:09:47.976 "get_zone_info": false, 00:09:47.976 "zone_management": false, 00:09:47.976 "zone_append": false, 00:09:47.976 "compare": false, 00:09:47.976 "compare_and_write": false, 00:09:47.976 "abort": true, 00:09:47.976 "seek_hole": false, 00:09:47.976 "seek_data": false, 00:09:47.976 "copy": true, 00:09:47.976 "nvme_iov_md": false 00:09:47.976 }, 00:09:47.976 "memory_domains": [ 00:09:47.976 { 00:09:47.976 "dma_device_id": "system", 00:09:47.976 "dma_device_type": 1 00:09:47.976 }, 00:09:47.976 { 00:09:47.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.976 "dma_device_type": 2 00:09:47.976 } 00:09:47.976 ], 00:09:47.976 "driver_specific": {} 00:09:47.976 } 00:09:47.976 ] 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.976 "name": "Existed_Raid", 00:09:47.976 "uuid": "3d4d849f-5be0-4d5d-be4c-9f46d61587da", 00:09:47.976 "strip_size_kb": 64, 00:09:47.976 "state": "online", 00:09:47.976 "raid_level": "raid0", 00:09:47.976 "superblock": true, 00:09:47.976 "num_base_bdevs": 4, 
00:09:47.976 "num_base_bdevs_discovered": 4, 00:09:47.976 "num_base_bdevs_operational": 4, 00:09:47.976 "base_bdevs_list": [ 00:09:47.976 { 00:09:47.976 "name": "BaseBdev1", 00:09:47.976 "uuid": "19f55d14-fd90-4640-8584-61c2811a8110", 00:09:47.976 "is_configured": true, 00:09:47.976 "data_offset": 2048, 00:09:47.976 "data_size": 63488 00:09:47.976 }, 00:09:47.976 { 00:09:47.976 "name": "BaseBdev2", 00:09:47.976 "uuid": "bce638df-2c35-4708-8d2c-661a20e499d0", 00:09:47.976 "is_configured": true, 00:09:47.976 "data_offset": 2048, 00:09:47.976 "data_size": 63488 00:09:47.976 }, 00:09:47.976 { 00:09:47.976 "name": "BaseBdev3", 00:09:47.976 "uuid": "44dbadbb-cb93-40d9-8a6a-62913b1a852d", 00:09:47.976 "is_configured": true, 00:09:47.976 "data_offset": 2048, 00:09:47.976 "data_size": 63488 00:09:47.976 }, 00:09:47.976 { 00:09:47.976 "name": "BaseBdev4", 00:09:47.976 "uuid": "8c44ab50-355f-4671-9002-e0bdd0095fd9", 00:09:47.976 "is_configured": true, 00:09:47.976 "data_offset": 2048, 00:09:47.976 "data_size": 63488 00:09:47.976 } 00:09:47.976 ] 00:09:47.976 }' 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.976 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.544 
13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.544 [2024-11-26 13:22:36.958688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.544 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.544 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.544 "name": "Existed_Raid", 00:09:48.544 "aliases": [ 00:09:48.544 "3d4d849f-5be0-4d5d-be4c-9f46d61587da" 00:09:48.544 ], 00:09:48.544 "product_name": "Raid Volume", 00:09:48.544 "block_size": 512, 00:09:48.544 "num_blocks": 253952, 00:09:48.544 "uuid": "3d4d849f-5be0-4d5d-be4c-9f46d61587da", 00:09:48.544 "assigned_rate_limits": { 00:09:48.544 "rw_ios_per_sec": 0, 00:09:48.544 "rw_mbytes_per_sec": 0, 00:09:48.544 "r_mbytes_per_sec": 0, 00:09:48.544 "w_mbytes_per_sec": 0 00:09:48.544 }, 00:09:48.544 "claimed": false, 00:09:48.544 "zoned": false, 00:09:48.544 "supported_io_types": { 00:09:48.544 "read": true, 00:09:48.544 "write": true, 00:09:48.544 "unmap": true, 00:09:48.544 "flush": true, 00:09:48.544 "reset": true, 00:09:48.544 "nvme_admin": false, 00:09:48.544 "nvme_io": false, 00:09:48.544 "nvme_io_md": false, 00:09:48.544 "write_zeroes": true, 00:09:48.544 "zcopy": false, 00:09:48.544 "get_zone_info": false, 00:09:48.544 "zone_management": false, 00:09:48.544 "zone_append": false, 00:09:48.544 "compare": false, 00:09:48.544 "compare_and_write": false, 00:09:48.544 "abort": false, 00:09:48.544 "seek_hole": false, 00:09:48.544 "seek_data": false, 00:09:48.544 "copy": false, 00:09:48.544 
"nvme_iov_md": false 00:09:48.544 }, 00:09:48.544 "memory_domains": [ 00:09:48.544 { 00:09:48.544 "dma_device_id": "system", 00:09:48.544 "dma_device_type": 1 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.544 "dma_device_type": 2 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "system", 00:09:48.544 "dma_device_type": 1 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.544 "dma_device_type": 2 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "system", 00:09:48.544 "dma_device_type": 1 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.544 "dma_device_type": 2 00:09:48.544 }, 00:09:48.544 { 00:09:48.545 "dma_device_id": "system", 00:09:48.545 "dma_device_type": 1 00:09:48.545 }, 00:09:48.545 { 00:09:48.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.545 "dma_device_type": 2 00:09:48.545 } 00:09:48.545 ], 00:09:48.545 "driver_specific": { 00:09:48.545 "raid": { 00:09:48.545 "uuid": "3d4d849f-5be0-4d5d-be4c-9f46d61587da", 00:09:48.545 "strip_size_kb": 64, 00:09:48.545 "state": "online", 00:09:48.545 "raid_level": "raid0", 00:09:48.545 "superblock": true, 00:09:48.545 "num_base_bdevs": 4, 00:09:48.545 "num_base_bdevs_discovered": 4, 00:09:48.545 "num_base_bdevs_operational": 4, 00:09:48.545 "base_bdevs_list": [ 00:09:48.545 { 00:09:48.545 "name": "BaseBdev1", 00:09:48.545 "uuid": "19f55d14-fd90-4640-8584-61c2811a8110", 00:09:48.545 "is_configured": true, 00:09:48.545 "data_offset": 2048, 00:09:48.545 "data_size": 63488 00:09:48.545 }, 00:09:48.545 { 00:09:48.545 "name": "BaseBdev2", 00:09:48.545 "uuid": "bce638df-2c35-4708-8d2c-661a20e499d0", 00:09:48.545 "is_configured": true, 00:09:48.545 "data_offset": 2048, 00:09:48.545 "data_size": 63488 00:09:48.545 }, 00:09:48.545 { 00:09:48.545 "name": "BaseBdev3", 00:09:48.545 "uuid": "44dbadbb-cb93-40d9-8a6a-62913b1a852d", 00:09:48.545 "is_configured": true, 
00:09:48.545 "data_offset": 2048, 00:09:48.545 "data_size": 63488 00:09:48.545 }, 00:09:48.545 { 00:09:48.545 "name": "BaseBdev4", 00:09:48.545 "uuid": "8c44ab50-355f-4671-9002-e0bdd0095fd9", 00:09:48.545 "is_configured": true, 00:09:48.545 "data_offset": 2048, 00:09:48.545 "data_size": 63488 00:09:48.545 } 00:09:48.545 ] 00:09:48.545 } 00:09:48.545 } 00:09:48.545 }' 00:09:48.545 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.545 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:48.545 BaseBdev2 00:09:48.545 BaseBdev3 00:09:48.545 BaseBdev4' 00:09:48.545 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.804 13:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.804 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.804 [2024-11-26 13:22:37.330529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.804 [2024-11-26 13:22:37.330560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.804 [2024-11-26 13:22:37.330655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.062 "name": "Existed_Raid", 00:09:49.062 "uuid": "3d4d849f-5be0-4d5d-be4c-9f46d61587da", 00:09:49.062 "strip_size_kb": 64, 00:09:49.062 "state": "offline", 00:09:49.062 "raid_level": "raid0", 00:09:49.062 "superblock": true, 00:09:49.062 "num_base_bdevs": 4, 00:09:49.062 "num_base_bdevs_discovered": 3, 00:09:49.062 "num_base_bdevs_operational": 3, 00:09:49.062 "base_bdevs_list": [ 00:09:49.062 { 00:09:49.062 "name": null, 00:09:49.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.062 "is_configured": false, 00:09:49.062 "data_offset": 0, 00:09:49.062 "data_size": 63488 00:09:49.062 }, 00:09:49.062 { 00:09:49.062 "name": "BaseBdev2", 00:09:49.062 "uuid": "bce638df-2c35-4708-8d2c-661a20e499d0", 00:09:49.062 "is_configured": true, 00:09:49.062 "data_offset": 2048, 00:09:49.062 "data_size": 63488 00:09:49.062 }, 00:09:49.062 { 00:09:49.062 "name": "BaseBdev3", 00:09:49.062 "uuid": "44dbadbb-cb93-40d9-8a6a-62913b1a852d", 00:09:49.062 "is_configured": true, 00:09:49.062 "data_offset": 2048, 00:09:49.062 "data_size": 63488 00:09:49.062 }, 00:09:49.062 { 00:09:49.062 "name": "BaseBdev4", 00:09:49.062 "uuid": "8c44ab50-355f-4671-9002-e0bdd0095fd9", 00:09:49.062 "is_configured": true, 00:09:49.062 "data_offset": 2048, 00:09:49.062 "data_size": 63488 00:09:49.062 } 00:09:49.062 ] 00:09:49.062 }' 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.062 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.628 13:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.628 13:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.628 [2024-11-26 13:22:37.992442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.628 [2024-11-26 13:22:38.118910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.628 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:49.908 13:22:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.908 [2024-11-26 13:22:38.243784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:49.908 [2024-11-26 13:22:38.243989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.908 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.909 BaseBdev2 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.909 [ 00:09:49.909 { 00:09:49.909 "name": "BaseBdev2", 00:09:49.909 "aliases": [ 00:09:49.909 
"95609c0f-2601-446f-a3e8-fec82680215b" 00:09:49.909 ], 00:09:49.909 "product_name": "Malloc disk", 00:09:49.909 "block_size": 512, 00:09:49.909 "num_blocks": 65536, 00:09:49.909 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:49.909 "assigned_rate_limits": { 00:09:49.909 "rw_ios_per_sec": 0, 00:09:49.909 "rw_mbytes_per_sec": 0, 00:09:49.909 "r_mbytes_per_sec": 0, 00:09:49.909 "w_mbytes_per_sec": 0 00:09:49.909 }, 00:09:49.909 "claimed": false, 00:09:49.909 "zoned": false, 00:09:49.909 "supported_io_types": { 00:09:49.909 "read": true, 00:09:49.909 "write": true, 00:09:49.909 "unmap": true, 00:09:49.909 "flush": true, 00:09:49.909 "reset": true, 00:09:49.909 "nvme_admin": false, 00:09:49.909 "nvme_io": false, 00:09:49.909 "nvme_io_md": false, 00:09:49.909 "write_zeroes": true, 00:09:49.909 "zcopy": true, 00:09:49.909 "get_zone_info": false, 00:09:49.909 "zone_management": false, 00:09:49.909 "zone_append": false, 00:09:49.909 "compare": false, 00:09:49.909 "compare_and_write": false, 00:09:49.909 "abort": true, 00:09:49.909 "seek_hole": false, 00:09:49.909 "seek_data": false, 00:09:49.909 "copy": true, 00:09:49.909 "nvme_iov_md": false 00:09:49.909 }, 00:09:49.909 "memory_domains": [ 00:09:49.909 { 00:09:49.909 "dma_device_id": "system", 00:09:49.909 "dma_device_type": 1 00:09:49.909 }, 00:09:49.909 { 00:09:49.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.909 "dma_device_type": 2 00:09:49.909 } 00:09:49.909 ], 00:09:49.909 "driver_specific": {} 00:09:49.909 } 00:09:49.909 ] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.909 13:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.909 BaseBdev3 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.909 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.168 [ 00:09:50.168 { 
00:09:50.168 "name": "BaseBdev3", 00:09:50.168 "aliases": [ 00:09:50.168 "b3f3a701-5553-4672-8a13-9c8547822681" 00:09:50.168 ], 00:09:50.168 "product_name": "Malloc disk", 00:09:50.168 "block_size": 512, 00:09:50.168 "num_blocks": 65536, 00:09:50.168 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:50.168 "assigned_rate_limits": { 00:09:50.168 "rw_ios_per_sec": 0, 00:09:50.168 "rw_mbytes_per_sec": 0, 00:09:50.168 "r_mbytes_per_sec": 0, 00:09:50.168 "w_mbytes_per_sec": 0 00:09:50.168 }, 00:09:50.168 "claimed": false, 00:09:50.168 "zoned": false, 00:09:50.168 "supported_io_types": { 00:09:50.168 "read": true, 00:09:50.168 "write": true, 00:09:50.168 "unmap": true, 00:09:50.168 "flush": true, 00:09:50.168 "reset": true, 00:09:50.168 "nvme_admin": false, 00:09:50.168 "nvme_io": false, 00:09:50.168 "nvme_io_md": false, 00:09:50.168 "write_zeroes": true, 00:09:50.168 "zcopy": true, 00:09:50.168 "get_zone_info": false, 00:09:50.168 "zone_management": false, 00:09:50.168 "zone_append": false, 00:09:50.168 "compare": false, 00:09:50.168 "compare_and_write": false, 00:09:50.168 "abort": true, 00:09:50.168 "seek_hole": false, 00:09:50.168 "seek_data": false, 00:09:50.168 "copy": true, 00:09:50.168 "nvme_iov_md": false 00:09:50.168 }, 00:09:50.168 "memory_domains": [ 00:09:50.168 { 00:09:50.168 "dma_device_id": "system", 00:09:50.168 "dma_device_type": 1 00:09:50.168 }, 00:09:50.168 { 00:09:50.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.168 "dma_device_type": 2 00:09:50.168 } 00:09:50.168 ], 00:09:50.168 "driver_specific": {} 00:09:50.168 } 00:09:50.168 ] 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.168 BaseBdev4 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.168 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:50.169 [ 00:09:50.169 { 00:09:50.169 "name": "BaseBdev4", 00:09:50.169 "aliases": [ 00:09:50.169 "242d9d9f-77f4-42cb-baee-e3c71a865910" 00:09:50.169 ], 00:09:50.169 "product_name": "Malloc disk", 00:09:50.169 "block_size": 512, 00:09:50.169 "num_blocks": 65536, 00:09:50.169 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:50.169 "assigned_rate_limits": { 00:09:50.169 "rw_ios_per_sec": 0, 00:09:50.169 "rw_mbytes_per_sec": 0, 00:09:50.169 "r_mbytes_per_sec": 0, 00:09:50.169 "w_mbytes_per_sec": 0 00:09:50.169 }, 00:09:50.169 "claimed": false, 00:09:50.169 "zoned": false, 00:09:50.169 "supported_io_types": { 00:09:50.169 "read": true, 00:09:50.169 "write": true, 00:09:50.169 "unmap": true, 00:09:50.169 "flush": true, 00:09:50.169 "reset": true, 00:09:50.169 "nvme_admin": false, 00:09:50.169 "nvme_io": false, 00:09:50.169 "nvme_io_md": false, 00:09:50.169 "write_zeroes": true, 00:09:50.169 "zcopy": true, 00:09:50.169 "get_zone_info": false, 00:09:50.169 "zone_management": false, 00:09:50.169 "zone_append": false, 00:09:50.169 "compare": false, 00:09:50.169 "compare_and_write": false, 00:09:50.169 "abort": true, 00:09:50.169 "seek_hole": false, 00:09:50.169 "seek_data": false, 00:09:50.169 "copy": true, 00:09:50.169 "nvme_iov_md": false 00:09:50.169 }, 00:09:50.169 "memory_domains": [ 00:09:50.169 { 00:09:50.169 "dma_device_id": "system", 00:09:50.169 "dma_device_type": 1 00:09:50.169 }, 00:09:50.169 { 00:09:50.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.169 "dma_device_type": 2 00:09:50.169 } 00:09:50.169 ], 00:09:50.169 "driver_specific": {} 00:09:50.169 } 00:09:50.169 ] 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.169 13:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.169 [2024-11-26 13:22:38.575677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.169 [2024-11-26 13:22:38.575930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.169 [2024-11-26 13:22:38.575972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.169 [2024-11-26 13:22:38.578156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.169 [2024-11-26 13:22:38.578220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.169 "name": "Existed_Raid", 00:09:50.169 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:50.169 "strip_size_kb": 64, 00:09:50.169 "state": "configuring", 00:09:50.169 "raid_level": "raid0", 00:09:50.169 "superblock": true, 00:09:50.169 "num_base_bdevs": 4, 00:09:50.169 "num_base_bdevs_discovered": 3, 00:09:50.169 "num_base_bdevs_operational": 4, 00:09:50.169 "base_bdevs_list": [ 00:09:50.169 { 00:09:50.169 "name": "BaseBdev1", 00:09:50.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.169 "is_configured": false, 00:09:50.169 "data_offset": 0, 00:09:50.169 "data_size": 0 00:09:50.169 }, 00:09:50.169 { 00:09:50.169 "name": "BaseBdev2", 00:09:50.169 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:50.169 "is_configured": true, 00:09:50.169 "data_offset": 2048, 00:09:50.169 "data_size": 63488 
00:09:50.169 }, 00:09:50.169 { 00:09:50.169 "name": "BaseBdev3", 00:09:50.169 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:50.169 "is_configured": true, 00:09:50.169 "data_offset": 2048, 00:09:50.169 "data_size": 63488 00:09:50.169 }, 00:09:50.169 { 00:09:50.169 "name": "BaseBdev4", 00:09:50.169 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:50.169 "is_configured": true, 00:09:50.169 "data_offset": 2048, 00:09:50.169 "data_size": 63488 00:09:50.169 } 00:09:50.169 ] 00:09:50.169 }' 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.169 13:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.737 [2024-11-26 13:22:39.107789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.737 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.737 "name": "Existed_Raid", 00:09:50.737 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:50.737 "strip_size_kb": 64, 00:09:50.737 "state": "configuring", 00:09:50.737 "raid_level": "raid0", 00:09:50.737 "superblock": true, 00:09:50.737 "num_base_bdevs": 4, 00:09:50.737 "num_base_bdevs_discovered": 2, 00:09:50.737 "num_base_bdevs_operational": 4, 00:09:50.737 "base_bdevs_list": [ 00:09:50.737 { 00:09:50.737 "name": "BaseBdev1", 00:09:50.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.737 "is_configured": false, 00:09:50.737 "data_offset": 0, 00:09:50.737 "data_size": 0 00:09:50.737 }, 00:09:50.737 { 00:09:50.737 "name": null, 00:09:50.737 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:50.737 "is_configured": false, 00:09:50.737 "data_offset": 0, 00:09:50.737 "data_size": 63488 
00:09:50.737 }, 00:09:50.738 { 00:09:50.738 "name": "BaseBdev3", 00:09:50.738 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:50.738 "is_configured": true, 00:09:50.738 "data_offset": 2048, 00:09:50.738 "data_size": 63488 00:09:50.738 }, 00:09:50.738 { 00:09:50.738 "name": "BaseBdev4", 00:09:50.738 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:50.738 "is_configured": true, 00:09:50.738 "data_offset": 2048, 00:09:50.738 "data_size": 63488 00:09:50.738 } 00:09:50.738 ] 00:09:50.738 }' 00:09:50.738 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.738 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.303 [2024-11-26 13:22:39.716459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.303 BaseBdev1 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.303 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.304 [ 00:09:51.304 { 00:09:51.304 "name": "BaseBdev1", 00:09:51.304 "aliases": [ 00:09:51.304 "fc297064-631e-49db-bd45-4c55f5e37290" 00:09:51.304 ], 00:09:51.304 "product_name": "Malloc disk", 00:09:51.304 "block_size": 512, 00:09:51.304 "num_blocks": 65536, 00:09:51.304 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:51.304 "assigned_rate_limits": { 00:09:51.304 "rw_ios_per_sec": 0, 00:09:51.304 "rw_mbytes_per_sec": 0, 
00:09:51.304 "r_mbytes_per_sec": 0, 00:09:51.304 "w_mbytes_per_sec": 0 00:09:51.304 }, 00:09:51.304 "claimed": true, 00:09:51.304 "claim_type": "exclusive_write", 00:09:51.304 "zoned": false, 00:09:51.304 "supported_io_types": { 00:09:51.304 "read": true, 00:09:51.304 "write": true, 00:09:51.304 "unmap": true, 00:09:51.304 "flush": true, 00:09:51.304 "reset": true, 00:09:51.304 "nvme_admin": false, 00:09:51.304 "nvme_io": false, 00:09:51.304 "nvme_io_md": false, 00:09:51.304 "write_zeroes": true, 00:09:51.304 "zcopy": true, 00:09:51.304 "get_zone_info": false, 00:09:51.304 "zone_management": false, 00:09:51.304 "zone_append": false, 00:09:51.304 "compare": false, 00:09:51.304 "compare_and_write": false, 00:09:51.304 "abort": true, 00:09:51.304 "seek_hole": false, 00:09:51.304 "seek_data": false, 00:09:51.304 "copy": true, 00:09:51.304 "nvme_iov_md": false 00:09:51.304 }, 00:09:51.304 "memory_domains": [ 00:09:51.304 { 00:09:51.304 "dma_device_id": "system", 00:09:51.304 "dma_device_type": 1 00:09:51.304 }, 00:09:51.304 { 00:09:51.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.304 "dma_device_type": 2 00:09:51.304 } 00:09:51.304 ], 00:09:51.304 "driver_specific": {} 00:09:51.304 } 00:09:51.304 ] 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.304 13:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.304 "name": "Existed_Raid", 00:09:51.304 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:51.304 "strip_size_kb": 64, 00:09:51.304 "state": "configuring", 00:09:51.304 "raid_level": "raid0", 00:09:51.304 "superblock": true, 00:09:51.304 "num_base_bdevs": 4, 00:09:51.304 "num_base_bdevs_discovered": 3, 00:09:51.304 "num_base_bdevs_operational": 4, 00:09:51.304 "base_bdevs_list": [ 00:09:51.304 { 00:09:51.304 "name": "BaseBdev1", 00:09:51.304 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:51.304 "is_configured": true, 00:09:51.304 "data_offset": 2048, 00:09:51.304 "data_size": 63488 00:09:51.304 }, 00:09:51.304 { 
00:09:51.304 "name": null, 00:09:51.304 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:51.304 "is_configured": false, 00:09:51.304 "data_offset": 0, 00:09:51.304 "data_size": 63488 00:09:51.304 }, 00:09:51.304 { 00:09:51.304 "name": "BaseBdev3", 00:09:51.304 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:51.304 "is_configured": true, 00:09:51.304 "data_offset": 2048, 00:09:51.304 "data_size": 63488 00:09:51.304 }, 00:09:51.304 { 00:09:51.304 "name": "BaseBdev4", 00:09:51.304 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:51.304 "is_configured": true, 00:09:51.304 "data_offset": 2048, 00:09:51.304 "data_size": 63488 00:09:51.304 } 00:09:51.304 ] 00:09:51.304 }' 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.304 13:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.871 [2024-11-26 13:22:40.324647] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.871 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.872 13:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.872 "name": "Existed_Raid", 00:09:51.872 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:51.872 "strip_size_kb": 64, 00:09:51.872 "state": "configuring", 00:09:51.872 "raid_level": "raid0", 00:09:51.872 "superblock": true, 00:09:51.872 "num_base_bdevs": 4, 00:09:51.872 "num_base_bdevs_discovered": 2, 00:09:51.872 "num_base_bdevs_operational": 4, 00:09:51.872 "base_bdevs_list": [ 00:09:51.872 { 00:09:51.872 "name": "BaseBdev1", 00:09:51.872 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:51.872 "is_configured": true, 00:09:51.872 "data_offset": 2048, 00:09:51.872 "data_size": 63488 00:09:51.872 }, 00:09:51.872 { 00:09:51.872 "name": null, 00:09:51.872 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:51.872 "is_configured": false, 00:09:51.872 "data_offset": 0, 00:09:51.872 "data_size": 63488 00:09:51.872 }, 00:09:51.872 { 00:09:51.872 "name": null, 00:09:51.872 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:51.872 "is_configured": false, 00:09:51.872 "data_offset": 0, 00:09:51.872 "data_size": 63488 00:09:51.872 }, 00:09:51.872 { 00:09:51.872 "name": "BaseBdev4", 00:09:51.872 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:51.872 "is_configured": true, 00:09:51.872 "data_offset": 2048, 00:09:51.872 "data_size": 63488 00:09:51.872 } 00:09:51.872 ] 00:09:51.872 }' 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.872 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.440 13:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.440 [2024-11-26 13:22:40.900773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.440 "name": "Existed_Raid", 00:09:52.440 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:52.440 "strip_size_kb": 64, 00:09:52.440 "state": "configuring", 00:09:52.440 "raid_level": "raid0", 00:09:52.440 "superblock": true, 00:09:52.440 "num_base_bdevs": 4, 00:09:52.440 "num_base_bdevs_discovered": 3, 00:09:52.440 "num_base_bdevs_operational": 4, 00:09:52.440 "base_bdevs_list": [ 00:09:52.440 { 00:09:52.440 "name": "BaseBdev1", 00:09:52.440 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:52.440 "is_configured": true, 00:09:52.440 "data_offset": 2048, 00:09:52.440 "data_size": 63488 00:09:52.440 }, 00:09:52.440 { 00:09:52.440 "name": null, 00:09:52.440 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:52.440 "is_configured": false, 00:09:52.440 "data_offset": 0, 00:09:52.440 "data_size": 63488 00:09:52.440 }, 00:09:52.440 { 00:09:52.440 "name": "BaseBdev3", 00:09:52.440 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:52.440 "is_configured": true, 00:09:52.440 "data_offset": 2048, 00:09:52.440 "data_size": 63488 00:09:52.440 }, 00:09:52.440 { 00:09:52.440 "name": "BaseBdev4", 00:09:52.440 "uuid": 
"242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:52.440 "is_configured": true, 00:09:52.440 "data_offset": 2048, 00:09:52.440 "data_size": 63488 00:09:52.440 } 00:09:52.440 ] 00:09:52.440 }' 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.440 13:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.008 [2024-11-26 13:22:41.492945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.008 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.267 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.267 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.267 "name": "Existed_Raid", 00:09:53.267 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:53.267 "strip_size_kb": 64, 00:09:53.267 "state": "configuring", 00:09:53.267 "raid_level": "raid0", 00:09:53.267 "superblock": true, 00:09:53.267 "num_base_bdevs": 4, 00:09:53.267 "num_base_bdevs_discovered": 2, 00:09:53.267 "num_base_bdevs_operational": 4, 00:09:53.267 "base_bdevs_list": [ 00:09:53.267 { 00:09:53.267 "name": null, 00:09:53.267 
"uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:53.267 "is_configured": false, 00:09:53.267 "data_offset": 0, 00:09:53.267 "data_size": 63488 00:09:53.267 }, 00:09:53.267 { 00:09:53.267 "name": null, 00:09:53.267 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:53.267 "is_configured": false, 00:09:53.267 "data_offset": 0, 00:09:53.267 "data_size": 63488 00:09:53.267 }, 00:09:53.267 { 00:09:53.267 "name": "BaseBdev3", 00:09:53.267 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:53.267 "is_configured": true, 00:09:53.267 "data_offset": 2048, 00:09:53.267 "data_size": 63488 00:09:53.267 }, 00:09:53.267 { 00:09:53.267 "name": "BaseBdev4", 00:09:53.267 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:53.267 "is_configured": true, 00:09:53.267 "data_offset": 2048, 00:09:53.268 "data_size": 63488 00:09:53.268 } 00:09:53.268 ] 00:09:53.268 }' 00:09:53.268 13:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.268 13:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.526 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.526 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.526 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.526 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.526 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.785 [2024-11-26 13:22:42.128930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.785 "name": "Existed_Raid", 00:09:53.785 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:53.785 "strip_size_kb": 64, 00:09:53.785 "state": "configuring", 00:09:53.785 "raid_level": "raid0", 00:09:53.785 "superblock": true, 00:09:53.785 "num_base_bdevs": 4, 00:09:53.785 "num_base_bdevs_discovered": 3, 00:09:53.785 "num_base_bdevs_operational": 4, 00:09:53.785 "base_bdevs_list": [ 00:09:53.785 { 00:09:53.785 "name": null, 00:09:53.785 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:53.785 "is_configured": false, 00:09:53.785 "data_offset": 0, 00:09:53.785 "data_size": 63488 00:09:53.785 }, 00:09:53.785 { 00:09:53.785 "name": "BaseBdev2", 00:09:53.785 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:53.785 "is_configured": true, 00:09:53.785 "data_offset": 2048, 00:09:53.785 "data_size": 63488 00:09:53.785 }, 00:09:53.785 { 00:09:53.785 "name": "BaseBdev3", 00:09:53.785 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:53.785 "is_configured": true, 00:09:53.785 "data_offset": 2048, 00:09:53.785 "data_size": 63488 00:09:53.785 }, 00:09:53.785 { 00:09:53.785 "name": "BaseBdev4", 00:09:53.785 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:53.785 "is_configured": true, 00:09:53.785 "data_offset": 2048, 00:09:53.785 "data_size": 63488 00:09:53.785 } 00:09:53.785 ] 00:09:53.785 }' 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.785 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.353 13:22:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fc297064-631e-49db-bd45-4c55f5e37290 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 [2024-11-26 13:22:42.800492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:54.353 [2024-11-26 13:22:42.800723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:54.353 [2024-11-26 13:22:42.800739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:54.353 NewBaseBdev 00:09:54.353 [2024-11-26 13:22:42.801023] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:54.353 [2024-11-26 13:22:42.801179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:54.353 [2024-11-26 13:22:42.801231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:54.353 [2024-11-26 13:22:42.801392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:54.353 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.353 
13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 [ 00:09:54.353 { 00:09:54.353 "name": "NewBaseBdev", 00:09:54.353 "aliases": [ 00:09:54.353 "fc297064-631e-49db-bd45-4c55f5e37290" 00:09:54.353 ], 00:09:54.353 "product_name": "Malloc disk", 00:09:54.353 "block_size": 512, 00:09:54.353 "num_blocks": 65536, 00:09:54.353 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:54.353 "assigned_rate_limits": { 00:09:54.353 "rw_ios_per_sec": 0, 00:09:54.353 "rw_mbytes_per_sec": 0, 00:09:54.353 "r_mbytes_per_sec": 0, 00:09:54.354 "w_mbytes_per_sec": 0 00:09:54.354 }, 00:09:54.354 "claimed": true, 00:09:54.354 "claim_type": "exclusive_write", 00:09:54.354 "zoned": false, 00:09:54.354 "supported_io_types": { 00:09:54.354 "read": true, 00:09:54.354 "write": true, 00:09:54.354 "unmap": true, 00:09:54.354 "flush": true, 00:09:54.354 "reset": true, 00:09:54.354 "nvme_admin": false, 00:09:54.354 "nvme_io": false, 00:09:54.354 "nvme_io_md": false, 00:09:54.354 "write_zeroes": true, 00:09:54.354 "zcopy": true, 00:09:54.354 "get_zone_info": false, 00:09:54.354 "zone_management": false, 00:09:54.354 "zone_append": false, 00:09:54.354 "compare": false, 00:09:54.354 "compare_and_write": false, 00:09:54.354 "abort": true, 00:09:54.354 "seek_hole": false, 00:09:54.354 "seek_data": false, 00:09:54.354 "copy": true, 00:09:54.354 "nvme_iov_md": false 00:09:54.354 }, 00:09:54.354 "memory_domains": [ 00:09:54.354 { 00:09:54.354 "dma_device_id": "system", 00:09:54.354 "dma_device_type": 1 00:09:54.354 }, 00:09:54.354 { 00:09:54.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.354 "dma_device_type": 2 00:09:54.354 } 00:09:54.354 ], 00:09:54.354 "driver_specific": {} 00:09:54.354 } 00:09:54.354 ] 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.354 13:22:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.354 "name": "Existed_Raid", 00:09:54.354 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:54.354 "strip_size_kb": 64, 00:09:54.354 
"state": "online", 00:09:54.354 "raid_level": "raid0", 00:09:54.354 "superblock": true, 00:09:54.354 "num_base_bdevs": 4, 00:09:54.354 "num_base_bdevs_discovered": 4, 00:09:54.354 "num_base_bdevs_operational": 4, 00:09:54.354 "base_bdevs_list": [ 00:09:54.354 { 00:09:54.354 "name": "NewBaseBdev", 00:09:54.354 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:54.354 "is_configured": true, 00:09:54.354 "data_offset": 2048, 00:09:54.354 "data_size": 63488 00:09:54.354 }, 00:09:54.354 { 00:09:54.354 "name": "BaseBdev2", 00:09:54.354 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:54.354 "is_configured": true, 00:09:54.354 "data_offset": 2048, 00:09:54.354 "data_size": 63488 00:09:54.354 }, 00:09:54.354 { 00:09:54.354 "name": "BaseBdev3", 00:09:54.354 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:54.354 "is_configured": true, 00:09:54.354 "data_offset": 2048, 00:09:54.354 "data_size": 63488 00:09:54.354 }, 00:09:54.354 { 00:09:54.354 "name": "BaseBdev4", 00:09:54.354 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:54.354 "is_configured": true, 00:09:54.354 "data_offset": 2048, 00:09:54.354 "data_size": 63488 00:09:54.354 } 00:09:54.354 ] 00:09:54.354 }' 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.354 13:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.922 
13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.922 [2024-11-26 13:22:43.365019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.922 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.922 "name": "Existed_Raid", 00:09:54.922 "aliases": [ 00:09:54.922 "a167e813-a866-4ce0-89a3-e22539a6be34" 00:09:54.922 ], 00:09:54.922 "product_name": "Raid Volume", 00:09:54.922 "block_size": 512, 00:09:54.922 "num_blocks": 253952, 00:09:54.922 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:54.922 "assigned_rate_limits": { 00:09:54.922 "rw_ios_per_sec": 0, 00:09:54.922 "rw_mbytes_per_sec": 0, 00:09:54.922 "r_mbytes_per_sec": 0, 00:09:54.922 "w_mbytes_per_sec": 0 00:09:54.922 }, 00:09:54.922 "claimed": false, 00:09:54.922 "zoned": false, 00:09:54.922 "supported_io_types": { 00:09:54.922 "read": true, 00:09:54.922 "write": true, 00:09:54.922 "unmap": true, 00:09:54.922 "flush": true, 00:09:54.922 "reset": true, 00:09:54.922 "nvme_admin": false, 00:09:54.922 "nvme_io": false, 00:09:54.922 "nvme_io_md": false, 00:09:54.922 "write_zeroes": true, 00:09:54.922 "zcopy": false, 00:09:54.922 "get_zone_info": false, 00:09:54.922 "zone_management": false, 00:09:54.922 "zone_append": false, 00:09:54.922 "compare": false, 00:09:54.922 "compare_and_write": false, 00:09:54.923 "abort": 
false, 00:09:54.923 "seek_hole": false, 00:09:54.923 "seek_data": false, 00:09:54.923 "copy": false, 00:09:54.923 "nvme_iov_md": false 00:09:54.923 }, 00:09:54.923 "memory_domains": [ 00:09:54.923 { 00:09:54.923 "dma_device_id": "system", 00:09:54.923 "dma_device_type": 1 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.923 "dma_device_type": 2 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "dma_device_id": "system", 00:09:54.923 "dma_device_type": 1 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.923 "dma_device_type": 2 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "dma_device_id": "system", 00:09:54.923 "dma_device_type": 1 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.923 "dma_device_type": 2 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "dma_device_id": "system", 00:09:54.923 "dma_device_type": 1 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.923 "dma_device_type": 2 00:09:54.923 } 00:09:54.923 ], 00:09:54.923 "driver_specific": { 00:09:54.923 "raid": { 00:09:54.923 "uuid": "a167e813-a866-4ce0-89a3-e22539a6be34", 00:09:54.923 "strip_size_kb": 64, 00:09:54.923 "state": "online", 00:09:54.923 "raid_level": "raid0", 00:09:54.923 "superblock": true, 00:09:54.923 "num_base_bdevs": 4, 00:09:54.923 "num_base_bdevs_discovered": 4, 00:09:54.923 "num_base_bdevs_operational": 4, 00:09:54.923 "base_bdevs_list": [ 00:09:54.923 { 00:09:54.923 "name": "NewBaseBdev", 00:09:54.923 "uuid": "fc297064-631e-49db-bd45-4c55f5e37290", 00:09:54.923 "is_configured": true, 00:09:54.923 "data_offset": 2048, 00:09:54.923 "data_size": 63488 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "name": "BaseBdev2", 00:09:54.923 "uuid": "95609c0f-2601-446f-a3e8-fec82680215b", 00:09:54.923 "is_configured": true, 00:09:54.923 "data_offset": 2048, 00:09:54.923 "data_size": 63488 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 
"name": "BaseBdev3", 00:09:54.923 "uuid": "b3f3a701-5553-4672-8a13-9c8547822681", 00:09:54.923 "is_configured": true, 00:09:54.923 "data_offset": 2048, 00:09:54.923 "data_size": 63488 00:09:54.923 }, 00:09:54.923 { 00:09:54.923 "name": "BaseBdev4", 00:09:54.923 "uuid": "242d9d9f-77f4-42cb-baee-e3c71a865910", 00:09:54.923 "is_configured": true, 00:09:54.923 "data_offset": 2048, 00:09:54.923 "data_size": 63488 00:09:54.923 } 00:09:54.923 ] 00:09:54.923 } 00:09:54.923 } 00:09:54.923 }' 00:09:54.923 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.923 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:54.923 BaseBdev2 00:09:54.923 BaseBdev3 00:09:54.923 BaseBdev4' 00:09:54.923 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.182 13:22:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.182 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.183 [2024-11-26 13:22:43.740762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.183 [2024-11-26 13:22:43.740790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.183 [2024-11-26 13:22:43.740855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.183 [2024-11-26 13:22:43.740925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.183 [2024-11-26 13:22:43.740939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69591 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69591 ']' 00:09:55.183 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69591 00:09:55.441 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:55.441 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.442 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69591 00:09:55.442 killing process with pid 69591 00:09:55.442 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.442 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.442 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69591' 00:09:55.442 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69591 00:09:55.442 [2024-11-26 13:22:43.778960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.442 13:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69591 00:09:55.700 [2024-11-26 13:22:44.049476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.639 13:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:56.639 00:09:56.639 real 0m12.426s 00:09:56.639 user 0m21.021s 00:09:56.639 sys 0m1.714s 00:09:56.639 13:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.639 
************************************ 00:09:56.639 END TEST raid_state_function_test_sb 00:09:56.639 ************************************ 00:09:56.639 13:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.639 13:22:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:56.639 13:22:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:56.639 13:22:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.639 13:22:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.639 ************************************ 00:09:56.639 START TEST raid_superblock_test 00:09:56.639 ************************************ 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70271 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70271 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70271 ']' 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.639 13:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.639 [2024-11-26 13:22:45.069102] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:09:56.639 [2024-11-26 13:22:45.069590] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70271 ] 00:09:56.898 [2024-11-26 13:22:45.251309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.898 [2024-11-26 13:22:45.349446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.157 [2024-11-26 13:22:45.519054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.157 [2024-11-26 13:22:45.519117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:57.725 
13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 malloc1 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 [2024-11-26 13:22:46.066117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:57.725 [2024-11-26 13:22:46.066209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.725 [2024-11-26 13:22:46.066241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:57.725 [2024-11-26 13:22:46.066276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.725 [2024-11-26 13:22:46.068702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.725 [2024-11-26 13:22:46.068743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:57.725 pt1 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 malloc2 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 [2024-11-26 13:22:46.112172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:57.725 [2024-11-26 13:22:46.112263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.725 [2024-11-26 13:22:46.112294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:57.725 [2024-11-26 13:22:46.112308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.725 [2024-11-26 13:22:46.114668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.725 [2024-11-26 13:22:46.114709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:57.725 
pt2 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:57.725 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 malloc3 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 [2024-11-26 13:22:46.165929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:57.726 [2024-11-26 13:22:46.165998] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.726 [2024-11-26 13:22:46.166027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:57.726 [2024-11-26 13:22:46.166041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.726 [2024-11-26 13:22:46.168408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.726 [2024-11-26 13:22:46.168449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:57.726 pt3 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 malloc4 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 [2024-11-26 13:22:46.216264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:57.726 [2024-11-26 13:22:46.216495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.726 [2024-11-26 13:22:46.216563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:57.726 [2024-11-26 13:22:46.216716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.726 [2024-11-26 13:22:46.219084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.726 [2024-11-26 13:22:46.219260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:57.726 pt4 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 [2024-11-26 13:22:46.228306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:57.726 [2024-11-26 
13:22:46.230360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.726 [2024-11-26 13:22:46.230591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:57.726 [2024-11-26 13:22:46.230717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:57.726 [2024-11-26 13:22:46.230929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:57.726 [2024-11-26 13:22:46.230945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:57.726 [2024-11-26 13:22:46.231216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:57.726 [2024-11-26 13:22:46.231497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:57.726 [2024-11-26 13:22:46.231518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:57.726 [2024-11-26 13:22:46.231702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.726 "name": "raid_bdev1", 00:09:57.726 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:09:57.726 "strip_size_kb": 64, 00:09:57.726 "state": "online", 00:09:57.726 "raid_level": "raid0", 00:09:57.726 "superblock": true, 00:09:57.726 "num_base_bdevs": 4, 00:09:57.726 "num_base_bdevs_discovered": 4, 00:09:57.726 "num_base_bdevs_operational": 4, 00:09:57.726 "base_bdevs_list": [ 00:09:57.726 { 00:09:57.726 "name": "pt1", 00:09:57.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.726 "is_configured": true, 00:09:57.726 "data_offset": 2048, 00:09:57.726 "data_size": 63488 00:09:57.726 }, 00:09:57.726 { 00:09:57.726 "name": "pt2", 00:09:57.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.726 "is_configured": true, 00:09:57.726 "data_offset": 2048, 00:09:57.726 "data_size": 63488 00:09:57.726 }, 00:09:57.726 { 00:09:57.726 "name": "pt3", 00:09:57.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.726 "is_configured": true, 00:09:57.726 "data_offset": 2048, 00:09:57.726 
"data_size": 63488 00:09:57.726 }, 00:09:57.726 { 00:09:57.726 "name": "pt4", 00:09:57.726 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:57.726 "is_configured": true, 00:09:57.726 "data_offset": 2048, 00:09:57.726 "data_size": 63488 00:09:57.726 } 00:09:57.726 ] 00:09:57.726 }' 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.726 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.294 [2024-11-26 13:22:46.748715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.294 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.294 "name": "raid_bdev1", 00:09:58.294 "aliases": [ 00:09:58.294 "e8d33164-c33b-47c4-88d2-dcbcd24ab119" 
00:09:58.294 ], 00:09:58.294 "product_name": "Raid Volume", 00:09:58.294 "block_size": 512, 00:09:58.294 "num_blocks": 253952, 00:09:58.294 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:09:58.294 "assigned_rate_limits": { 00:09:58.294 "rw_ios_per_sec": 0, 00:09:58.294 "rw_mbytes_per_sec": 0, 00:09:58.294 "r_mbytes_per_sec": 0, 00:09:58.294 "w_mbytes_per_sec": 0 00:09:58.294 }, 00:09:58.294 "claimed": false, 00:09:58.294 "zoned": false, 00:09:58.294 "supported_io_types": { 00:09:58.294 "read": true, 00:09:58.294 "write": true, 00:09:58.294 "unmap": true, 00:09:58.294 "flush": true, 00:09:58.294 "reset": true, 00:09:58.294 "nvme_admin": false, 00:09:58.294 "nvme_io": false, 00:09:58.294 "nvme_io_md": false, 00:09:58.294 "write_zeroes": true, 00:09:58.294 "zcopy": false, 00:09:58.294 "get_zone_info": false, 00:09:58.294 "zone_management": false, 00:09:58.294 "zone_append": false, 00:09:58.294 "compare": false, 00:09:58.294 "compare_and_write": false, 00:09:58.294 "abort": false, 00:09:58.294 "seek_hole": false, 00:09:58.294 "seek_data": false, 00:09:58.294 "copy": false, 00:09:58.294 "nvme_iov_md": false 00:09:58.294 }, 00:09:58.294 "memory_domains": [ 00:09:58.294 { 00:09:58.294 "dma_device_id": "system", 00:09:58.294 "dma_device_type": 1 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.294 "dma_device_type": 2 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "dma_device_id": "system", 00:09:58.294 "dma_device_type": 1 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.294 "dma_device_type": 2 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "dma_device_id": "system", 00:09:58.294 "dma_device_type": 1 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.294 "dma_device_type": 2 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "dma_device_id": "system", 00:09:58.294 "dma_device_type": 1 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:58.294 "dma_device_type": 2 00:09:58.294 } 00:09:58.294 ], 00:09:58.294 "driver_specific": { 00:09:58.294 "raid": { 00:09:58.294 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:09:58.294 "strip_size_kb": 64, 00:09:58.294 "state": "online", 00:09:58.294 "raid_level": "raid0", 00:09:58.294 "superblock": true, 00:09:58.294 "num_base_bdevs": 4, 00:09:58.294 "num_base_bdevs_discovered": 4, 00:09:58.294 "num_base_bdevs_operational": 4, 00:09:58.294 "base_bdevs_list": [ 00:09:58.294 { 00:09:58.294 "name": "pt1", 00:09:58.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.294 "is_configured": true, 00:09:58.294 "data_offset": 2048, 00:09:58.294 "data_size": 63488 00:09:58.294 }, 00:09:58.294 { 00:09:58.294 "name": "pt2", 00:09:58.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.295 "is_configured": true, 00:09:58.295 "data_offset": 2048, 00:09:58.295 "data_size": 63488 00:09:58.295 }, 00:09:58.295 { 00:09:58.295 "name": "pt3", 00:09:58.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.295 "is_configured": true, 00:09:58.295 "data_offset": 2048, 00:09:58.295 "data_size": 63488 00:09:58.295 }, 00:09:58.295 { 00:09:58.295 "name": "pt4", 00:09:58.295 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:58.295 "is_configured": true, 00:09:58.295 "data_offset": 2048, 00:09:58.295 "data_size": 63488 00:09:58.295 } 00:09:58.295 ] 00:09:58.295 } 00:09:58.295 } 00:09:58.295 }' 00:09:58.295 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.295 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:58.295 pt2 00:09:58.295 pt3 00:09:58.295 pt4' 00:09:58.295 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- 
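The harness extracts the configured base bdev names from the `bdev_get_bdevs` JSON with jq (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`). As a rough stand-in for environments without jq, the same selection can be approximated with grep/cut on an abbreviated inline copy of that JSON (the literal below is a cut-down assumption, not the full RPC output):

```shell
# Approximate the jq filter 'select(.is_configured == true).name' with
# plain grep/cut. Works only on this compact single-line JSON shape;
# the real harness uses jq against the full bdev_get_bdevs output.
json='{"base_bdevs_list":[{"name":"pt1","is_configured":true},{"name":"pt2","is_configured":true}]}'
names=$(echo "$json" | grep -o '"name":"pt[0-9]*"' | cut -d'"' -f4)
echo $names   # pt1 pt2
```

The jq version is strictly better in practice since it actually honors the `is_configured` predicate rather than assuming every matched name is configured.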
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.564 13:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.564 13:22:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.564 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 [2024-11-26 13:22:47.136809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e8d33164-c33b-47c4-88d2-dcbcd24ab119 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e8d33164-c33b-47c4-88d2-dcbcd24ab119 ']' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 [2024-11-26 13:22:47.180503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.864 [2024-11-26 13:22:47.180528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.864 [2024-11-26 13:22:47.180593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.864 [2024-11-26 13:22:47.180660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.864 [2024-11-26 13:22:47.180679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:58.864 13:22:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 [2024-11-26 13:22:47.332553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:58.864 [2024-11-26 13:22:47.334706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:58.864 [2024-11-26 13:22:47.334784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:58.864 [2024-11-26 13:22:47.334849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:58.864 [2024-11-26 13:22:47.334913] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:58.864 [2024-11-26 13:22:47.334985] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:58.864 [2024-11-26 13:22:47.335015] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:58.864 [2024-11-26 13:22:47.335042] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:58.864 [2024-11-26 13:22:47.335061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.864 [2024-11-26 13:22:47.335076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:09:58.864 request: 00:09:58.864 { 00:09:58.864 "name": "raid_bdev1", 00:09:58.864 "raid_level": "raid0", 00:09:58.864 "base_bdevs": [ 00:09:58.864 "malloc1", 00:09:58.864 "malloc2", 00:09:58.864 "malloc3", 00:09:58.864 "malloc4" 00:09:58.864 ], 00:09:58.864 "strip_size_kb": 64, 00:09:58.864 "superblock": false, 00:09:58.864 "method": "bdev_raid_create", 00:09:58.864 "req_id": 1 00:09:58.864 } 00:09:58.864 Got JSON-RPC error response 00:09:58.864 response: 00:09:58.864 { 00:09:58.864 "code": -17, 00:09:58.864 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:58.864 } 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 [2024-11-26 13:22:47.400551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:58.864 [2024-11-26 13:22:47.400751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.864 [2024-11-26 13:22:47.400827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:58.865 [2024-11-26 13:22:47.400933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.865 [2024-11-26 13:22:47.403446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.865 [2024-11-26 13:22:47.403662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:58.865 [2024-11-26 13:22:47.403837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:58.865 [2024-11-26 13:22:47.404001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:58.865 pt1 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.865 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.152 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.152 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.152 "name": "raid_bdev1", 00:09:59.152 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:09:59.152 "strip_size_kb": 64, 00:09:59.152 "state": "configuring", 00:09:59.152 "raid_level": "raid0", 00:09:59.152 "superblock": true, 00:09:59.152 "num_base_bdevs": 4, 00:09:59.152 "num_base_bdevs_discovered": 1, 00:09:59.152 "num_base_bdevs_operational": 4, 00:09:59.152 "base_bdevs_list": [ 00:09:59.152 { 00:09:59.152 "name": "pt1", 00:09:59.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.152 "is_configured": true, 00:09:59.152 "data_offset": 2048, 00:09:59.152 "data_size": 63488 00:09:59.152 }, 00:09:59.152 { 00:09:59.152 "name": null, 00:09:59.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.152 "is_configured": false, 00:09:59.152 "data_offset": 2048, 00:09:59.152 "data_size": 63488 00:09:59.152 }, 00:09:59.152 { 00:09:59.152 "name": null, 00:09:59.152 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.152 "is_configured": false, 00:09:59.152 "data_offset": 2048, 00:09:59.152 "data_size": 63488 00:09:59.152 }, 00:09:59.152 { 00:09:59.152 "name": null, 00:09:59.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:59.152 "is_configured": false, 00:09:59.152 "data_offset": 2048, 00:09:59.152 "data_size": 63488 00:09:59.152 } 00:09:59.152 ] 00:09:59.152 }' 00:09:59.152 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.152 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.423 [2024-11-26 13:22:47.932699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.423 [2024-11-26 13:22:47.932774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.423 [2024-11-26 13:22:47.932795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:59.423 [2024-11-26 13:22:47.932809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.423 [2024-11-26 13:22:47.933187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.423 [2024-11-26 13:22:47.933221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.423 [2024-11-26 13:22:47.933320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:59.423 [2024-11-26 13:22:47.933351] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.423 pt2 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.423 [2024-11-26 13:22:47.940693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.423 13:22:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.423 13:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.682 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.682 "name": "raid_bdev1", 00:09:59.682 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:09:59.682 "strip_size_kb": 64, 00:09:59.682 "state": "configuring", 00:09:59.682 "raid_level": "raid0", 00:09:59.682 "superblock": true, 00:09:59.682 "num_base_bdevs": 4, 00:09:59.682 "num_base_bdevs_discovered": 1, 00:09:59.682 "num_base_bdevs_operational": 4, 00:09:59.682 "base_bdevs_list": [ 00:09:59.682 { 00:09:59.682 "name": "pt1", 00:09:59.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.682 "is_configured": true, 00:09:59.682 "data_offset": 2048, 00:09:59.682 "data_size": 63488 00:09:59.682 }, 00:09:59.682 { 00:09:59.682 "name": null, 00:09:59.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.682 "is_configured": false, 00:09:59.682 "data_offset": 0, 00:09:59.682 "data_size": 63488 00:09:59.682 }, 00:09:59.682 { 00:09:59.682 "name": null, 00:09:59.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.682 "is_configured": false, 00:09:59.682 "data_offset": 2048, 00:09:59.682 "data_size": 63488 00:09:59.682 }, 00:09:59.682 { 00:09:59.682 "name": null, 00:09:59.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:59.682 "is_configured": false, 00:09:59.682 "data_offset": 2048, 00:09:59.682 "data_size": 63488 00:09:59.682 } 00:09:59.682 ] 00:09:59.682 }' 00:09:59.682 13:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.682 13:22:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.941 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:59.941 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.941 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.941 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.941 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.941 [2024-11-26 13:22:48.460799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.941 [2024-11-26 13:22:48.460851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.941 [2024-11-26 13:22:48.460874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:59.941 [2024-11-26 13:22:48.460887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.942 [2024-11-26 13:22:48.461295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.942 [2024-11-26 13:22:48.461319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.942 [2024-11-26 13:22:48.461390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:59.942 [2024-11-26 13:22:48.461414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.942 pt2 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.942 [2024-11-26 13:22:48.472800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.942 [2024-11-26 13:22:48.472850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.942 [2024-11-26 13:22:48.472883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:59.942 [2024-11-26 13:22:48.472897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.942 [2024-11-26 13:22:48.473275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.942 [2024-11-26 13:22:48.473302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.942 [2024-11-26 13:22:48.473368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:59.942 [2024-11-26 13:22:48.473423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.942 pt3 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.942 [2024-11-26 13:22:48.480781] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:59.942 [2024-11-26 13:22:48.480833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.942 [2024-11-26 13:22:48.480858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:59.942 [2024-11-26 13:22:48.480869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.942 [2024-11-26 13:22:48.481257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.942 [2024-11-26 13:22:48.481283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:59.942 [2024-11-26 13:22:48.481383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:59.942 [2024-11-26 13:22:48.481408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:59.942 [2024-11-26 13:22:48.481550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:59.942 [2024-11-26 13:22:48.481684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:59.942 [2024-11-26 13:22:48.481978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:59.942 [2024-11-26 13:22:48.482152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:59.942 [2024-11-26 13:22:48.482171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:59.942 [2024-11-26 13:22:48.482351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.942 pt4 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.942 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.201 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.201 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.201 "name": "raid_bdev1", 00:10:00.201 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:10:00.201 "strip_size_kb": 64, 00:10:00.201 "state": "online", 00:10:00.201 "raid_level": "raid0", 00:10:00.201 
"superblock": true, 00:10:00.201 "num_base_bdevs": 4, 00:10:00.201 "num_base_bdevs_discovered": 4, 00:10:00.201 "num_base_bdevs_operational": 4, 00:10:00.201 "base_bdevs_list": [ 00:10:00.201 { 00:10:00.201 "name": "pt1", 00:10:00.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.201 "is_configured": true, 00:10:00.201 "data_offset": 2048, 00:10:00.201 "data_size": 63488 00:10:00.201 }, 00:10:00.201 { 00:10:00.201 "name": "pt2", 00:10:00.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.201 "is_configured": true, 00:10:00.201 "data_offset": 2048, 00:10:00.201 "data_size": 63488 00:10:00.201 }, 00:10:00.201 { 00:10:00.201 "name": "pt3", 00:10:00.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.201 "is_configured": true, 00:10:00.201 "data_offset": 2048, 00:10:00.201 "data_size": 63488 00:10:00.201 }, 00:10:00.201 { 00:10:00.201 "name": "pt4", 00:10:00.201 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:00.201 "is_configured": true, 00:10:00.201 "data_offset": 2048, 00:10:00.201 "data_size": 63488 00:10:00.201 } 00:10:00.201 ] 00:10:00.201 }' 00:10:00.201 13:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.201 13:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.461 13:22:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.461 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.461 [2024-11-26 13:22:49.017181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.720 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.720 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.720 "name": "raid_bdev1", 00:10:00.720 "aliases": [ 00:10:00.720 "e8d33164-c33b-47c4-88d2-dcbcd24ab119" 00:10:00.720 ], 00:10:00.720 "product_name": "Raid Volume", 00:10:00.720 "block_size": 512, 00:10:00.720 "num_blocks": 253952, 00:10:00.720 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:10:00.720 "assigned_rate_limits": { 00:10:00.720 "rw_ios_per_sec": 0, 00:10:00.720 "rw_mbytes_per_sec": 0, 00:10:00.720 "r_mbytes_per_sec": 0, 00:10:00.720 "w_mbytes_per_sec": 0 00:10:00.720 }, 00:10:00.720 "claimed": false, 00:10:00.720 "zoned": false, 00:10:00.720 "supported_io_types": { 00:10:00.720 "read": true, 00:10:00.720 "write": true, 00:10:00.720 "unmap": true, 00:10:00.720 "flush": true, 00:10:00.720 "reset": true, 00:10:00.720 "nvme_admin": false, 00:10:00.720 "nvme_io": false, 00:10:00.720 "nvme_io_md": false, 00:10:00.720 "write_zeroes": true, 00:10:00.720 "zcopy": false, 00:10:00.720 "get_zone_info": false, 00:10:00.720 "zone_management": false, 00:10:00.720 "zone_append": false, 00:10:00.720 "compare": false, 00:10:00.720 "compare_and_write": false, 00:10:00.720 "abort": false, 00:10:00.720 "seek_hole": false, 00:10:00.720 "seek_data": false, 00:10:00.720 "copy": false, 00:10:00.720 "nvme_iov_md": false 00:10:00.720 }, 00:10:00.720 
"memory_domains": [ 00:10:00.720 { 00:10:00.720 "dma_device_id": "system", 00:10:00.720 "dma_device_type": 1 00:10:00.720 }, 00:10:00.720 { 00:10:00.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.720 "dma_device_type": 2 00:10:00.720 }, 00:10:00.720 { 00:10:00.720 "dma_device_id": "system", 00:10:00.720 "dma_device_type": 1 00:10:00.720 }, 00:10:00.720 { 00:10:00.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.720 "dma_device_type": 2 00:10:00.720 }, 00:10:00.720 { 00:10:00.720 "dma_device_id": "system", 00:10:00.720 "dma_device_type": 1 00:10:00.720 }, 00:10:00.720 { 00:10:00.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.720 "dma_device_type": 2 00:10:00.720 }, 00:10:00.720 { 00:10:00.720 "dma_device_id": "system", 00:10:00.720 "dma_device_type": 1 00:10:00.720 }, 00:10:00.720 { 00:10:00.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.720 "dma_device_type": 2 00:10:00.720 } 00:10:00.720 ], 00:10:00.720 "driver_specific": { 00:10:00.721 "raid": { 00:10:00.721 "uuid": "e8d33164-c33b-47c4-88d2-dcbcd24ab119", 00:10:00.721 "strip_size_kb": 64, 00:10:00.721 "state": "online", 00:10:00.721 "raid_level": "raid0", 00:10:00.721 "superblock": true, 00:10:00.721 "num_base_bdevs": 4, 00:10:00.721 "num_base_bdevs_discovered": 4, 00:10:00.721 "num_base_bdevs_operational": 4, 00:10:00.721 "base_bdevs_list": [ 00:10:00.721 { 00:10:00.721 "name": "pt1", 00:10:00.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.721 "is_configured": true, 00:10:00.721 "data_offset": 2048, 00:10:00.721 "data_size": 63488 00:10:00.721 }, 00:10:00.721 { 00:10:00.721 "name": "pt2", 00:10:00.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.721 "is_configured": true, 00:10:00.721 "data_offset": 2048, 00:10:00.721 "data_size": 63488 00:10:00.721 }, 00:10:00.721 { 00:10:00.721 "name": "pt3", 00:10:00.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.721 "is_configured": true, 00:10:00.721 "data_offset": 2048, 00:10:00.721 "data_size": 63488 
00:10:00.721 }, 00:10:00.721 { 00:10:00.721 "name": "pt4", 00:10:00.721 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:00.721 "is_configured": true, 00:10:00.721 "data_offset": 2048, 00:10:00.721 "data_size": 63488 00:10:00.721 } 00:10:00.721 ] 00:10:00.721 } 00:10:00.721 } 00:10:00.721 }' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:00.721 pt2 00:10:00.721 pt3 00:10:00.721 pt4' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.721 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:00.980 
13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:00.980 [2024-11-26 13:22:49.397231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e8d33164-c33b-47c4-88d2-dcbcd24ab119 '!=' e8d33164-c33b-47c4-88d2-dcbcd24ab119 ']' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70271 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70271 ']' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70271 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70271 00:10:00.980 killing process with pid 70271 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70271' 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70271 00:10:00.980 [2024-11-26 13:22:49.474570] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.980 [2024-11-26 13:22:49.474648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.980 13:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70271 00:10:00.980 [2024-11-26 13:22:49.474711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.980 [2024-11-26 13:22:49.474724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:01.239 [2024-11-26 13:22:49.741308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.179 13:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:02.179 ************************************ 00:10:02.179 END TEST raid_superblock_test 00:10:02.179 ************************************ 00:10:02.179 00:10:02.179 real 0m5.639s 00:10:02.179 user 0m8.643s 00:10:02.179 sys 0m0.879s 00:10:02.179 13:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.179 13:22:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.179 13:22:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:02.179 13:22:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:02.179 13:22:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.179 13:22:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.179 ************************************ 00:10:02.179 START TEST raid_read_error_test 00:10:02.179 ************************************ 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UhgjtZUWAQ 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70537 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70537 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 70537 ']' 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.179 13:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.438 [2024-11-26 13:22:50.774035] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:10:02.438 [2024-11-26 13:22:50.774253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70537 ] 00:10:02.438 [2024-11-26 13:22:50.957478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.697 [2024-11-26 13:22:51.055548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.697 [2024-11-26 13:22:51.224996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.697 [2024-11-26 13:22:51.225061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.265 BaseBdev1_malloc 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.265 true 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.265 [2024-11-26 13:22:51.791114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:03.265 [2024-11-26 13:22:51.791179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.265 [2024-11-26 13:22:51.791205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:03.265 [2024-11-26 13:22:51.791221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.265 [2024-11-26 13:22:51.793640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.265 [2024-11-26 13:22:51.793687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:03.265 BaseBdev1 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.265 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 BaseBdev2_malloc 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 true 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 [2024-11-26 13:22:51.841291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:03.525 [2024-11-26 13:22:51.841348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.525 [2024-11-26 13:22:51.841371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:03.525 [2024-11-26 13:22:51.841385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.525 [2024-11-26 13:22:51.843757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.525 [2024-11-26 13:22:51.843801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:03.525 BaseBdev2 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 BaseBdev3_malloc 00:10:03.525 13:22:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 true 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 [2024-11-26 13:22:51.903169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:03.525 [2024-11-26 13:22:51.903227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.525 [2024-11-26 13:22:51.903285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:03.525 [2024-11-26 13:22:51.903303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.525 [2024-11-26 13:22:51.905650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.525 [2024-11-26 13:22:51.905695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:03.525 BaseBdev3 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 BaseBdev4_malloc 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.525 true 00:10:03.525 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 [2024-11-26 13:22:51.952896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:03.526 [2024-11-26 13:22:51.952952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.526 [2024-11-26 13:22:51.952975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:03.526 [2024-11-26 13:22:51.952990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.526 [2024-11-26 13:22:51.955409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.526 [2024-11-26 13:22:51.955457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:03.526 BaseBdev4 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 [2024-11-26 13:22:51.960967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.526 [2024-11-26 13:22:51.963242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.526 [2024-11-26 13:22:51.963520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.526 [2024-11-26 13:22:51.963689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.526 [2024-11-26 13:22:51.963994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:03.526 [2024-11-26 13:22:51.964064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:03.526 [2024-11-26 13:22:51.964483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:03.526 [2024-11-26 13:22:51.964825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:03.526 [2024-11-26 13:22:51.964936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:03.526 [2024-11-26 13:22:51.965299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:03.526 13:22:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 13:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 13:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.526 "name": "raid_bdev1", 00:10:03.526 "uuid": "9133d7c5-d972-4d2f-8040-15e812f17ca1", 00:10:03.526 "strip_size_kb": 64, 00:10:03.526 "state": "online", 00:10:03.526 "raid_level": "raid0", 00:10:03.526 "superblock": true, 00:10:03.526 "num_base_bdevs": 4, 00:10:03.526 "num_base_bdevs_discovered": 4, 00:10:03.526 "num_base_bdevs_operational": 4, 00:10:03.526 "base_bdevs_list": [ 00:10:03.526 
{ 00:10:03.526 "name": "BaseBdev1", 00:10:03.526 "uuid": "ded425cd-e9ad-518f-9644-878d3c67f79d", 00:10:03.526 "is_configured": true, 00:10:03.526 "data_offset": 2048, 00:10:03.526 "data_size": 63488 00:10:03.526 }, 00:10:03.526 { 00:10:03.526 "name": "BaseBdev2", 00:10:03.526 "uuid": "73cee3a8-7437-529e-a3d2-21a8a9b40413", 00:10:03.526 "is_configured": true, 00:10:03.526 "data_offset": 2048, 00:10:03.526 "data_size": 63488 00:10:03.526 }, 00:10:03.526 { 00:10:03.526 "name": "BaseBdev3", 00:10:03.526 "uuid": "f63a326d-fa1d-51fa-9062-a991d8b7ee3d", 00:10:03.526 "is_configured": true, 00:10:03.526 "data_offset": 2048, 00:10:03.526 "data_size": 63488 00:10:03.526 }, 00:10:03.526 { 00:10:03.526 "name": "BaseBdev4", 00:10:03.526 "uuid": "a3f0d822-f5ad-55d0-8931-65fd9f326862", 00:10:03.526 "is_configured": true, 00:10:03.526 "data_offset": 2048, 00:10:03.526 "data_size": 63488 00:10:03.526 } 00:10:03.526 ] 00:10:03.526 }' 00:10:03.526 13:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.526 13:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.094 13:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:04.095 13:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:04.095 [2024-11-26 13:22:52.602424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.031 13:22:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.031 13:22:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.031 "name": "raid_bdev1", 00:10:05.031 "uuid": "9133d7c5-d972-4d2f-8040-15e812f17ca1", 00:10:05.031 "strip_size_kb": 64, 00:10:05.031 "state": "online", 00:10:05.031 "raid_level": "raid0", 00:10:05.031 "superblock": true, 00:10:05.031 "num_base_bdevs": 4, 00:10:05.031 "num_base_bdevs_discovered": 4, 00:10:05.031 "num_base_bdevs_operational": 4, 00:10:05.031 "base_bdevs_list": [ 00:10:05.031 { 00:10:05.031 "name": "BaseBdev1", 00:10:05.031 "uuid": "ded425cd-e9ad-518f-9644-878d3c67f79d", 00:10:05.031 "is_configured": true, 00:10:05.031 "data_offset": 2048, 00:10:05.031 "data_size": 63488 00:10:05.031 }, 00:10:05.031 { 00:10:05.031 "name": "BaseBdev2", 00:10:05.031 "uuid": "73cee3a8-7437-529e-a3d2-21a8a9b40413", 00:10:05.031 "is_configured": true, 00:10:05.031 "data_offset": 2048, 00:10:05.031 "data_size": 63488 00:10:05.031 }, 00:10:05.031 { 00:10:05.031 "name": "BaseBdev3", 00:10:05.031 "uuid": "f63a326d-fa1d-51fa-9062-a991d8b7ee3d", 00:10:05.031 "is_configured": true, 00:10:05.031 "data_offset": 2048, 00:10:05.031 "data_size": 63488 00:10:05.031 }, 00:10:05.031 { 00:10:05.031 "name": "BaseBdev4", 00:10:05.031 "uuid": "a3f0d822-f5ad-55d0-8931-65fd9f326862", 00:10:05.031 "is_configured": true, 00:10:05.031 "data_offset": 2048, 00:10:05.031 "data_size": 63488 00:10:05.031 } 00:10:05.031 ] 00:10:05.031 }' 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.031 13:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.599 [2024-11-26 13:22:54.030334] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.599 [2024-11-26 13:22:54.030677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.599 [2024-11-26 13:22:54.033517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.599 [2024-11-26 13:22:54.033717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.599 [2024-11-26 13:22:54.033814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.599 [2024-11-26 13:22:54.033959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:05.599 { 00:10:05.599 "results": [ 00:10:05.599 { 00:10:05.599 "job": "raid_bdev1", 00:10:05.599 "core_mask": "0x1", 00:10:05.599 "workload": "randrw", 00:10:05.599 "percentage": 50, 00:10:05.599 "status": "finished", 00:10:05.599 "queue_depth": 1, 00:10:05.599 "io_size": 131072, 00:10:05.599 "runtime": 1.426194, 00:10:05.599 "iops": 13086.578684246322, 00:10:05.599 "mibps": 1635.8223355307903, 00:10:05.599 "io_failed": 1, 00:10:05.599 "io_timeout": 0, 00:10:05.599 "avg_latency_us": 106.80932459878723, 00:10:05.599 "min_latency_us": 35.14181818181818, 00:10:05.599 "max_latency_us": 1414.9818181818182 00:10:05.599 } 00:10:05.599 ], 00:10:05.599 "core_count": 1 00:10:05.599 } 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70537 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70537 ']' 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70537 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70537 00:10:05.599 killing process with pid 70537 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70537' 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70537 00:10:05.599 [2024-11-26 13:22:54.066979] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.599 13:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70537 00:10:05.858 [2024-11-26 13:22:54.287493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UhgjtZUWAQ 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:06.795 ************************************ 00:10:06.795 END TEST raid_read_error_test 00:10:06.795 ************************************ 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:06.795 00:10:06.795 real 0m4.527s 
00:10:06.795 user 0m5.676s 00:10:06.795 sys 0m0.579s 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.795 13:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.795 13:22:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:06.795 13:22:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.795 13:22:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.795 13:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.795 ************************************ 00:10:06.795 START TEST raid_write_error_test 00:10:06.795 ************************************ 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EF1d1DFozq 00:10:06.795 13:22:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70677 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70677 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70677 ']' 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.795 13:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.795 [2024-11-26 13:22:55.356674] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:10:06.795 [2024-11-26 13:22:55.356859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70677 ] 00:10:07.054 [2024-11-26 13:22:55.534281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.312 [2024-11-26 13:22:55.632197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.312 [2024-11-26 13:22:55.801407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.312 [2024-11-26 13:22:55.801469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 BaseBdev1_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 true 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 [2024-11-26 13:22:56.317201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:07.881 [2024-11-26 13:22:56.317313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.881 [2024-11-26 13:22:56.317343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:07.881 [2024-11-26 13:22:56.317360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.881 [2024-11-26 13:22:56.319752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.881 [2024-11-26 13:22:56.319799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:07.881 BaseBdev1 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 BaseBdev2_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:07.881 13:22:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 true 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 [2024-11-26 13:22:56.366964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:07.881 [2024-11-26 13:22:56.367274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.881 [2024-11-26 13:22:56.367308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:07.881 [2024-11-26 13:22:56.367326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.881 [2024-11-26 13:22:56.369726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.881 [2024-11-26 13:22:56.369769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:07.881 BaseBdev2 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:07.881 BaseBdev3_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 true 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.881 [2024-11-26 13:22:56.424003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:07.881 [2024-11-26 13:22:56.424060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.881 [2024-11-26 13:22:56.424084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:07.881 [2024-11-26 13:22:56.424099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.881 [2024-11-26 13:22:56.426474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.881 [2024-11-26 13:22:56.426521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:07.881 BaseBdev3 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.881 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.140 BaseBdev4_malloc 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.140 true 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.140 [2024-11-26 13:22:56.477692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:08.140 [2024-11-26 13:22:56.477763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.140 [2024-11-26 13:22:56.477787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:08.140 [2024-11-26 13:22:56.477803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.140 [2024-11-26 13:22:56.480311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.140 [2024-11-26 13:22:56.480360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:08.140 BaseBdev4 
00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.140 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.140 [2024-11-26 13:22:56.485749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.140 [2024-11-26 13:22:56.488056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.140 [2024-11-26 13:22:56.488157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.141 [2024-11-26 13:22:56.488246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:08.141 [2024-11-26 13:22:56.488593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:08.141 [2024-11-26 13:22:56.488619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:08.141 [2024-11-26 13:22:56.488926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:08.141 [2024-11-26 13:22:56.489127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:08.141 [2024-11-26 13:22:56.489146] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:08.141 [2024-11-26 13:22:56.489378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.141 "name": "raid_bdev1", 00:10:08.141 "uuid": "0b7a0e7c-c17b-48b0-bf87-133ab3020851", 00:10:08.141 "strip_size_kb": 64, 00:10:08.141 "state": "online", 00:10:08.141 "raid_level": "raid0", 00:10:08.141 "superblock": true, 00:10:08.141 "num_base_bdevs": 4, 00:10:08.141 "num_base_bdevs_discovered": 4, 00:10:08.141 
"num_base_bdevs_operational": 4, 00:10:08.141 "base_bdevs_list": [ 00:10:08.141 { 00:10:08.141 "name": "BaseBdev1", 00:10:08.141 "uuid": "260222a1-d3d0-5a9d-b02b-494b13c7abbd", 00:10:08.141 "is_configured": true, 00:10:08.141 "data_offset": 2048, 00:10:08.141 "data_size": 63488 00:10:08.141 }, 00:10:08.141 { 00:10:08.141 "name": "BaseBdev2", 00:10:08.141 "uuid": "0956c0a7-cd16-5dbd-acf0-7f23ba02747c", 00:10:08.141 "is_configured": true, 00:10:08.141 "data_offset": 2048, 00:10:08.141 "data_size": 63488 00:10:08.141 }, 00:10:08.141 { 00:10:08.141 "name": "BaseBdev3", 00:10:08.141 "uuid": "5ac814ce-d1d1-552f-b7cc-b15c11748bbe", 00:10:08.141 "is_configured": true, 00:10:08.141 "data_offset": 2048, 00:10:08.141 "data_size": 63488 00:10:08.141 }, 00:10:08.141 { 00:10:08.141 "name": "BaseBdev4", 00:10:08.141 "uuid": "84d8ee34-65ee-5ac2-a866-d9933d639d4e", 00:10:08.141 "is_configured": true, 00:10:08.141 "data_offset": 2048, 00:10:08.141 "data_size": 63488 00:10:08.141 } 00:10:08.141 ] 00:10:08.141 }' 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.141 13:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.708 13:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:08.708 13:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:08.708 [2024-11-26 13:22:57.126947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.645 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.646 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.646 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.646 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.646 13:22:58 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.646 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.646 "name": "raid_bdev1", 00:10:09.646 "uuid": "0b7a0e7c-c17b-48b0-bf87-133ab3020851", 00:10:09.646 "strip_size_kb": 64, 00:10:09.646 "state": "online", 00:10:09.646 "raid_level": "raid0", 00:10:09.646 "superblock": true, 00:10:09.646 "num_base_bdevs": 4, 00:10:09.646 "num_base_bdevs_discovered": 4, 00:10:09.646 "num_base_bdevs_operational": 4, 00:10:09.646 "base_bdevs_list": [ 00:10:09.646 { 00:10:09.646 "name": "BaseBdev1", 00:10:09.646 "uuid": "260222a1-d3d0-5a9d-b02b-494b13c7abbd", 00:10:09.646 "is_configured": true, 00:10:09.646 "data_offset": 2048, 00:10:09.646 "data_size": 63488 00:10:09.646 }, 00:10:09.646 { 00:10:09.646 "name": "BaseBdev2", 00:10:09.646 "uuid": "0956c0a7-cd16-5dbd-acf0-7f23ba02747c", 00:10:09.646 "is_configured": true, 00:10:09.646 "data_offset": 2048, 00:10:09.646 "data_size": 63488 00:10:09.646 }, 00:10:09.646 { 00:10:09.646 "name": "BaseBdev3", 00:10:09.646 "uuid": "5ac814ce-d1d1-552f-b7cc-b15c11748bbe", 00:10:09.646 "is_configured": true, 00:10:09.646 "data_offset": 2048, 00:10:09.646 "data_size": 63488 00:10:09.646 }, 00:10:09.646 { 00:10:09.646 "name": "BaseBdev4", 00:10:09.646 "uuid": "84d8ee34-65ee-5ac2-a866-d9933d639d4e", 00:10:09.646 "is_configured": true, 00:10:09.646 "data_offset": 2048, 00:10:09.646 "data_size": 63488 00:10:09.646 } 00:10:09.646 ] 00:10:09.646 }' 00:10:09.646 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.646 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:10.212 [2024-11-26 13:22:58.550954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.212 [2024-11-26 13:22:58.551969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.212 [2024-11-26 13:22:58.554892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.212 [2024-11-26 13:22:58.555008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.212 [2024-11-26 13:22:58.555065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.212 [2024-11-26 13:22:58.555084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:10.212 { 00:10:10.212 "results": [ 00:10:10.212 { 00:10:10.212 "job": "raid_bdev1", 00:10:10.212 "core_mask": "0x1", 00:10:10.212 "workload": "randrw", 00:10:10.212 "percentage": 50, 00:10:10.212 "status": "finished", 00:10:10.212 "queue_depth": 1, 00:10:10.212 "io_size": 131072, 00:10:10.212 "runtime": 1.423079, 00:10:10.212 "iops": 13245.926614053049, 00:10:10.212 "mibps": 1655.7408267566311, 00:10:10.212 "io_failed": 1, 00:10:10.212 "io_timeout": 0, 00:10:10.212 "avg_latency_us": 105.50055294872227, 00:10:10.212 "min_latency_us": 33.97818181818182, 00:10:10.212 "max_latency_us": 1474.56 00:10:10.212 } 00:10:10.212 ], 00:10:10.212 "core_count": 1 00:10:10.212 } 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70677 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70677 ']' 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70677 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70677 00:10:10.212 killing process with pid 70677 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.212 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.213 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70677' 00:10:10.213 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70677 00:10:10.213 [2024-11-26 13:22:58.591478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.213 13:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70677 00:10:10.471 [2024-11-26 13:22:58.810611] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EF1d1DFozq 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:11.413 00:10:11.413 real 0m4.465s 00:10:11.413 user 0m5.565s 00:10:11.413 sys 0m0.576s 00:10:11.413 
************************************ 00:10:11.413 END TEST raid_write_error_test 00:10:11.413 ************************************ 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.413 13:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.413 13:22:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:11.413 13:22:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:11.413 13:22:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.413 13:22:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.413 13:22:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.413 ************************************ 00:10:11.413 START TEST raid_state_function_test 00:10:11.413 ************************************ 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.413 13:22:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:11.413 13:22:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=70821 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70821' 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:11.413 Process raid pid: 70821 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 70821 00:10:11.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 70821 ']' 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.413 13:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.413 [2024-11-26 13:22:59.869566] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:10:11.413 [2024-11-26 13:22:59.869978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.672 [2024-11-26 13:23:00.051477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.672 [2024-11-26 13:23:00.152356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.930 [2024-11-26 13:23:00.326377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.930 [2024-11-26 13:23:00.326415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.189 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.189 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.189 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.189 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.189 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.447 [2024-11-26 13:23:00.755540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.447 [2024-11-26 13:23:00.755614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.447 [2024-11-26 13:23:00.755630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.447 [2024-11-26 13:23:00.755644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.447 [2024-11-26 13:23:00.755652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3
00:10:12.447 [2024-11-26 13:23:00.755664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:12.447 [2024-11-26 13:23:00.755672] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:12.447 [2024-11-26 13:23:00.755684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test --
common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:12.447 "name": "Existed_Raid",
00:10:12.447 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.447 "strip_size_kb": 64,
00:10:12.447 "state": "configuring",
00:10:12.447 "raid_level": "concat",
00:10:12.447 "superblock": false,
00:10:12.447 "num_base_bdevs": 4,
00:10:12.447 "num_base_bdevs_discovered": 0,
00:10:12.447 "num_base_bdevs_operational": 4,
00:10:12.447 "base_bdevs_list": [
00:10:12.447 {
00:10:12.447 "name": "BaseBdev1",
00:10:12.447 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.447 "is_configured": false,
00:10:12.447 "data_offset": 0,
00:10:12.447 "data_size": 0
00:10:12.447 },
00:10:12.447 {
00:10:12.447 "name": "BaseBdev2",
00:10:12.447 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.447 "is_configured": false,
00:10:12.447 "data_offset": 0,
00:10:12.447 "data_size": 0
00:10:12.447 },
00:10:12.447 {
00:10:12.447 "name": "BaseBdev3",
00:10:12.447 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.447 "is_configured": false,
00:10:12.447 "data_offset": 0,
00:10:12.447 "data_size": 0
00:10:12.447 },
00:10:12.447 {
00:10:12.447 "name": "BaseBdev4",
00:10:12.447 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.447 "is_configured": false,
00:10:12.447 "data_offset": 0,
00:10:12.447 "data_size": 0
00:10:12.447 }
00:10:12.447 ]
00:10:12.447 }'
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:12.447 13:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete
Existed_Raid
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.014 [2024-11-26 13:23:01.283600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:13.014 [2024-11-26 13:23:01.283798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.014 [2024-11-26 13:23:01.291587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:13.014 [2024-11-26 13:23:01.291648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:13.014 [2024-11-26 13:23:01.291661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:13.014 [2024-11-26 13:23:01.291675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:13.014 [2024-11-26 13:23:01.291683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:13.014 [2024-11-26 13:23:01.291694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:13.014 [2024-11-26 13:23:01.291702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:13.014 [2024-11-26 13:23:01.291713] bdev_raid_rpc.c:
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.014 [2024-11-26 13:23:01.330411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:13.014 BaseBdev1
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd
bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.014 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.014 [
00:10:13.014 {
00:10:13.014 "name": "BaseBdev1",
00:10:13.014 "aliases": [
00:10:13.014 "82457cb2-1c09-496f-a6f6-31829d6470dc"
00:10:13.014 ],
00:10:13.014 "product_name": "Malloc disk",
00:10:13.014 "block_size": 512,
00:10:13.014 "num_blocks": 65536,
00:10:13.014 "uuid": "82457cb2-1c09-496f-a6f6-31829d6470dc",
00:10:13.014 "assigned_rate_limits": {
00:10:13.014 "rw_ios_per_sec": 0,
00:10:13.014 "rw_mbytes_per_sec": 0,
00:10:13.014 "r_mbytes_per_sec": 0,
00:10:13.014 "w_mbytes_per_sec": 0
00:10:13.014 },
00:10:13.014 "claimed": true,
00:10:13.014 "claim_type": "exclusive_write",
00:10:13.014 "zoned": false,
00:10:13.014 "supported_io_types": {
00:10:13.014 "read": true,
00:10:13.014 "write": true,
00:10:13.014 "unmap": true,
00:10:13.014 "flush": true,
00:10:13.014 "reset": true,
00:10:13.014 "nvme_admin": false,
00:10:13.014 "nvme_io": false,
00:10:13.014 "nvme_io_md": false,
00:10:13.014 "write_zeroes": true,
00:10:13.015 "zcopy": true,
00:10:13.015 "get_zone_info": false,
00:10:13.015 "zone_management": false,
00:10:13.015 "zone_append": false,
00:10:13.015 "compare": false,
00:10:13.015 "compare_and_write": false,
00:10:13.015 "abort": true,
00:10:13.015 "seek_hole": false,
00:10:13.015 "seek_data": false,
00:10:13.015 "copy": true,
00:10:13.015 "nvme_iov_md": false
00:10:13.015 },
00:10:13.015 "memory_domains": [
00:10:13.015 {
00:10:13.015 "dma_device_id": "system",
00:10:13.015 "dma_device_type": 1
00:10:13.015 },
00:10:13.015 {
00:10:13.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.015 "dma_device_type": 2
00:10:13.015 }
00:10:13.015 ],
00:10:13.015 "driver_specific": {}
00:10:13.015 }
00:10:13.015 ]
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]]
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:13.015 "name": "Existed_Raid",
00:10:13.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.015 "strip_size_kb": 64,
00:10:13.015 "state": "configuring",
00:10:13.015 "raid_level": "concat",
00:10:13.015 "superblock": false,
00:10:13.015 "num_base_bdevs": 4,
00:10:13.015 "num_base_bdevs_discovered": 1,
00:10:13.015 "num_base_bdevs_operational": 4,
00:10:13.015 "base_bdevs_list": [
00:10:13.015 {
00:10:13.015 "name": "BaseBdev1",
00:10:13.015 "uuid": "82457cb2-1c09-496f-a6f6-31829d6470dc",
00:10:13.015 "is_configured": true,
00:10:13.015 "data_offset": 0,
00:10:13.015 "data_size": 65536
00:10:13.015 },
00:10:13.015 {
00:10:13.015 "name": "BaseBdev2",
00:10:13.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.015 "is_configured": false,
00:10:13.015 "data_offset": 0,
00:10:13.015 "data_size": 0
00:10:13.015 },
00:10:13.015 {
00:10:13.015 "name": "BaseBdev3",
00:10:13.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.015 "is_configured": false,
00:10:13.015 "data_offset": 0,
00:10:13.015 "data_size": 0
00:10:13.015 },
00:10:13.015 {
00:10:13.015 "name": "BaseBdev4",
00:10:13.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.015 "is_configured": false,
00:10:13.015 "data_offset": 0,
00:10:13.015 "data_size": 0
00:10:13.015 }
00:10:13.015 ]
00:10:13.015 }'
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:13.015 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.584 [2024-11-26 13:23:01.894574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:13.584 [2024-11-26 13:23:01.894631] bdev_raid.c:
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.584 [2024-11-26 13:23:01.906679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:13.584 [2024-11-26 13:23:01.908845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:13.584 [2024-11-26 13:23:01.909029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:13.584 [2024-11-26 13:23:01.909169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:13.584 [2024-11-26 13:23:01.909245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:13.584 [2024-11-26 13:23:01.909366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:13.584 [2024-11-26 13:23:01.909422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:13.584 "name": "Existed_Raid",
00:10:13.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.584 "strip_size_kb": 64,
00:10:13.584 "state": "configuring",
00:10:13.584 "raid_level": "concat",
00:10:13.584 "superblock": false,
00:10:13.584 "num_base_bdevs": 4,
00:10:13.584
"num_base_bdevs_discovered": 1,
00:10:13.584 "num_base_bdevs_operational": 4,
00:10:13.584 "base_bdevs_list": [
00:10:13.584 {
00:10:13.584 "name": "BaseBdev1",
00:10:13.584 "uuid": "82457cb2-1c09-496f-a6f6-31829d6470dc",
00:10:13.584 "is_configured": true,
00:10:13.584 "data_offset": 0,
00:10:13.584 "data_size": 65536
00:10:13.584 },
00:10:13.584 {
00:10:13.584 "name": "BaseBdev2",
00:10:13.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.584 "is_configured": false,
00:10:13.584 "data_offset": 0,
00:10:13.584 "data_size": 0
00:10:13.584 },
00:10:13.584 {
00:10:13.584 "name": "BaseBdev3",
00:10:13.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.584 "is_configured": false,
00:10:13.584 "data_offset": 0,
00:10:13.584 "data_size": 0
00:10:13.584 },
00:10:13.584 {
00:10:13.584 "name": "BaseBdev4",
00:10:13.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.584 "is_configured": false,
00:10:13.584 "data_offset": 0,
00:10:13.584 "data_size": 0
00:10:13.584 }
00:10:13.584 ]
00:10:13.584 }'
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:13.584 13:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.153 [2024-11-26 13:23:02.473186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:14.153 BaseBdev2
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:14.153 13:23:02
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.153 [
00:10:14.153 {
00:10:14.153 "name": "BaseBdev2",
00:10:14.153 "aliases": [
00:10:14.153 "20ed1de2-15e4-4c02-88bb-ddd2245f46a9"
00:10:14.153 ],
00:10:14.153 "product_name": "Malloc disk",
00:10:14.153 "block_size": 512,
00:10:14.153 "num_blocks": 65536,
00:10:14.153 "uuid": "20ed1de2-15e4-4c02-88bb-ddd2245f46a9",
00:10:14.153 "assigned_rate_limits": {
00:10:14.153 "rw_ios_per_sec": 0,
00:10:14.153 "rw_mbytes_per_sec": 0,
00:10:14.153 "r_mbytes_per_sec": 0,
00:10:14.153 "w_mbytes_per_sec": 0
00:10:14.153 },
00:10:14.153 "claimed": true,
00:10:14.153 "claim_type": "exclusive_write",
00:10:14.153 "zoned": false,
00:10:14.153 "supported_io_types": {
00:10:14.153 "read": true,
00:10:14.153 "write": true,
00:10:14.153 "unmap": true,
00:10:14.153 "flush": true,
00:10:14.153 "reset": true,
00:10:14.153 "nvme_admin": false,
00:10:14.153 "nvme_io": false,
00:10:14.153 "nvme_io_md": false,
00:10:14.153 "write_zeroes": true,
00:10:14.153 "zcopy": true,
00:10:14.153 "get_zone_info": false,
00:10:14.153 "zone_management": false,
00:10:14.153 "zone_append": false,
00:10:14.153 "compare": false,
00:10:14.153 "compare_and_write": false,
00:10:14.153 "abort": true,
00:10:14.153 "seek_hole": false,
00:10:14.153 "seek_data": false,
00:10:14.153 "copy": true,
00:10:14.153 "nvme_iov_md": false
00:10:14.153 },
00:10:14.153 "memory_domains": [
00:10:14.153 {
00:10:14.153 "dma_device_id": "system",
00:10:14.153 "dma_device_type": 1
00:10:14.153 },
00:10:14.153 {
00:10:14.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.153 "dma_device_type": 2
00:10:14.153 }
00:10:14.153 ],
00:10:14.153 "driver_specific": {}
00:10:14.153 }
00:10:14.153 ]
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- #
local strip_size=64
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:14.153 "name": "Existed_Raid",
00:10:14.153 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.153 "strip_size_kb": 64,
00:10:14.153 "state": "configuring",
00:10:14.153 "raid_level": "concat",
00:10:14.153 "superblock": false,
00:10:14.153 "num_base_bdevs": 4,
00:10:14.153 "num_base_bdevs_discovered": 2,
00:10:14.153 "num_base_bdevs_operational": 4,
00:10:14.153 "base_bdevs_list": [
00:10:14.153 {
00:10:14.153 "name": "BaseBdev1",
00:10:14.153 "uuid": "82457cb2-1c09-496f-a6f6-31829d6470dc",
00:10:14.153 "is_configured": true,
00:10:14.153 "data_offset": 0,
00:10:14.153 "data_size": 65536
00:10:14.153 },
00:10:14.153 {
00:10:14.153 "name": "BaseBdev2",
00:10:14.153 "uuid": "20ed1de2-15e4-4c02-88bb-ddd2245f46a9",
00:10:14.153
"is_configured": true,
00:10:14.153 "data_offset": 0,
00:10:14.153 "data_size": 65536
00:10:14.153 },
00:10:14.153 {
00:10:14.153 "name": "BaseBdev3",
00:10:14.153 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.153 "is_configured": false,
00:10:14.153 "data_offset": 0,
00:10:14.153 "data_size": 0
00:10:14.153 },
00:10:14.153 {
00:10:14.153 "name": "BaseBdev4",
00:10:14.153 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.153 "is_configured": false,
00:10:14.153 "data_offset": 0,
00:10:14.153 "data_size": 0
00:10:14.153 }
00:10:14.153 ]
00:10:14.153 }'
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:14.153 13:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.722 [2024-11-26 13:23:03.068038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:14.722 BaseBdev3
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test --
common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.722 [
00:10:14.722 {
00:10:14.722 "name": "BaseBdev3",
00:10:14.722 "aliases": [
00:10:14.722 "76ced4a7-5d09-4802-bb6e-ff53e520d0e1"
00:10:14.722 ],
00:10:14.722 "product_name": "Malloc disk",
00:10:14.722 "block_size": 512,
00:10:14.722 "num_blocks": 65536,
00:10:14.722 "uuid": "76ced4a7-5d09-4802-bb6e-ff53e520d0e1",
00:10:14.722 "assigned_rate_limits": {
00:10:14.722 "rw_ios_per_sec": 0,
00:10:14.722 "rw_mbytes_per_sec": 0,
00:10:14.722 "r_mbytes_per_sec": 0,
00:10:14.722 "w_mbytes_per_sec": 0
00:10:14.722 },
00:10:14.722 "claimed": true,
00:10:14.722 "claim_type": "exclusive_write",
00:10:14.722 "zoned": false,
00:10:14.722 "supported_io_types": {
00:10:14.722 "read": true,
00:10:14.722 "write": true,
00:10:14.722 "unmap": true,
00:10:14.722 "flush": true,
00:10:14.722 "reset": true,
00:10:14.722 "nvme_admin": false,
00:10:14.722 "nvme_io": false,
00:10:14.722 "nvme_io_md": false,
00:10:14.722 "write_zeroes": true,
00:10:14.722 "zcopy": true,
00:10:14.722 "get_zone_info": false,
00:10:14.722 "zone_management": false,
00:10:14.722 "zone_append": false,
00:10:14.722 "compare": false,
00:10:14.722 "compare_and_write": false,
00:10:14.722 "abort": true,
00:10:14.722 "seek_hole": false,
00:10:14.722 "seek_data": false,
00:10:14.722 "copy": true,
00:10:14.722 "nvme_iov_md": false
00:10:14.722 },
00:10:14.722 "memory_domains": [
00:10:14.722 {
00:10:14.722 "dma_device_id": "system",
00:10:14.722 "dma_device_type": 1
00:10:14.722 },
00:10:14.722 {
00:10:14.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.722 "dma_device_type": 2
00:10:14.722 }
00:10:14.722 ],
00:10:14.722 "driver_specific": {}
00:10:14.722 }
00:10:14.722 ]
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.722 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:14.722 "name": "Existed_Raid",
00:10:14.722 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.722 "strip_size_kb": 64,
00:10:14.722 "state": "configuring",
00:10:14.722 "raid_level": "concat",
00:10:14.722 "superblock": false,
00:10:14.722 "num_base_bdevs": 4,
00:10:14.722 "num_base_bdevs_discovered": 3,
00:10:14.722 "num_base_bdevs_operational": 4,
00:10:14.722 "base_bdevs_list": [
00:10:14.722 {
00:10:14.722 "name": "BaseBdev1",
00:10:14.722 "uuid": "82457cb2-1c09-496f-a6f6-31829d6470dc",
00:10:14.722 "is_configured": true,
00:10:14.722 "data_offset": 0,
00:10:14.722 "data_size": 65536
00:10:14.722 },
00:10:14.722 {
00:10:14.722 "name": "BaseBdev2",
00:10:14.722 "uuid": "20ed1de2-15e4-4c02-88bb-ddd2245f46a9",
00:10:14.723 "is_configured": true,
00:10:14.723 "data_offset": 0,
00:10:14.723 "data_size": 65536
00:10:14.723 },
00:10:14.723 {
00:10:14.723 "name": "BaseBdev3",
00:10:14.723 "uuid": "76ced4a7-5d09-4802-bb6e-ff53e520d0e1",
00:10:14.723 "is_configured": true,
00:10:14.723 "data_offset": 0,
00:10:14.723 "data_size": 65536
00:10:14.723 },
00:10:14.723 {
00:10:14.723 "name": "BaseBdev4",
00:10:14.723 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.723 "is_configured": false,
00:10:14.723 "data_offset": 0, 00:10:14.723 "data_size": 0 00:10:14.723 } 00:10:14.723 ] 00:10:14.723 }' 00:10:14.723 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.723 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.290 [2024-11-26 13:23:03.666989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.290 [2024-11-26 13:23:03.667036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.290 [2024-11-26 13:23:03.667047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:15.290 [2024-11-26 13:23:03.667387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:15.290 [2024-11-26 13:23:03.667597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.290 [2024-11-26 13:23:03.667620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:15.290 [2024-11-26 13:23:03.667920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.290 BaseBdev4 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.290 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.291 [ 00:10:15.291 { 00:10:15.291 "name": "BaseBdev4", 00:10:15.291 "aliases": [ 00:10:15.291 "74128cc8-8488-43a4-bc3f-5088ca0f9045" 00:10:15.291 ], 00:10:15.291 "product_name": "Malloc disk", 00:10:15.291 "block_size": 512, 00:10:15.291 "num_blocks": 65536, 00:10:15.291 "uuid": "74128cc8-8488-43a4-bc3f-5088ca0f9045", 00:10:15.291 "assigned_rate_limits": { 00:10:15.291 "rw_ios_per_sec": 0, 00:10:15.291 "rw_mbytes_per_sec": 0, 00:10:15.291 "r_mbytes_per_sec": 0, 00:10:15.291 "w_mbytes_per_sec": 0 00:10:15.291 }, 00:10:15.291 "claimed": true, 00:10:15.291 "claim_type": "exclusive_write", 00:10:15.291 "zoned": false, 00:10:15.291 "supported_io_types": { 00:10:15.291 "read": true, 00:10:15.291 "write": true, 00:10:15.291 "unmap": true, 00:10:15.291 "flush": true, 00:10:15.291 "reset": true, 00:10:15.291 
"nvme_admin": false, 00:10:15.291 "nvme_io": false, 00:10:15.291 "nvme_io_md": false, 00:10:15.291 "write_zeroes": true, 00:10:15.291 "zcopy": true, 00:10:15.291 "get_zone_info": false, 00:10:15.291 "zone_management": false, 00:10:15.291 "zone_append": false, 00:10:15.291 "compare": false, 00:10:15.291 "compare_and_write": false, 00:10:15.291 "abort": true, 00:10:15.291 "seek_hole": false, 00:10:15.291 "seek_data": false, 00:10:15.291 "copy": true, 00:10:15.291 "nvme_iov_md": false 00:10:15.291 }, 00:10:15.291 "memory_domains": [ 00:10:15.291 { 00:10:15.291 "dma_device_id": "system", 00:10:15.291 "dma_device_type": 1 00:10:15.291 }, 00:10:15.291 { 00:10:15.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.291 "dma_device_type": 2 00:10:15.291 } 00:10:15.291 ], 00:10:15.291 "driver_specific": {} 00:10:15.291 } 00:10:15.291 ] 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.291 
13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.291 "name": "Existed_Raid", 00:10:15.291 "uuid": "04ffa16d-c5cc-4694-97f5-9e81ef380678", 00:10:15.291 "strip_size_kb": 64, 00:10:15.291 "state": "online", 00:10:15.291 "raid_level": "concat", 00:10:15.291 "superblock": false, 00:10:15.291 "num_base_bdevs": 4, 00:10:15.291 "num_base_bdevs_discovered": 4, 00:10:15.291 "num_base_bdevs_operational": 4, 00:10:15.291 "base_bdevs_list": [ 00:10:15.291 { 00:10:15.291 "name": "BaseBdev1", 00:10:15.291 "uuid": "82457cb2-1c09-496f-a6f6-31829d6470dc", 00:10:15.291 "is_configured": true, 00:10:15.291 "data_offset": 0, 00:10:15.291 "data_size": 65536 00:10:15.291 }, 00:10:15.291 { 00:10:15.291 "name": "BaseBdev2", 00:10:15.291 "uuid": "20ed1de2-15e4-4c02-88bb-ddd2245f46a9", 00:10:15.291 "is_configured": true, 00:10:15.291 "data_offset": 0, 00:10:15.291 "data_size": 65536 00:10:15.291 }, 00:10:15.291 { 00:10:15.291 "name": "BaseBdev3", 
00:10:15.291 "uuid": "76ced4a7-5d09-4802-bb6e-ff53e520d0e1", 00:10:15.291 "is_configured": true, 00:10:15.291 "data_offset": 0, 00:10:15.291 "data_size": 65536 00:10:15.291 }, 00:10:15.291 { 00:10:15.291 "name": "BaseBdev4", 00:10:15.291 "uuid": "74128cc8-8488-43a4-bc3f-5088ca0f9045", 00:10:15.291 "is_configured": true, 00:10:15.291 "data_offset": 0, 00:10:15.291 "data_size": 65536 00:10:15.291 } 00:10:15.291 ] 00:10:15.291 }' 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.291 13:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.860 [2024-11-26 13:23:04.215468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.860 
13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.860 "name": "Existed_Raid", 00:10:15.860 "aliases": [ 00:10:15.860 "04ffa16d-c5cc-4694-97f5-9e81ef380678" 00:10:15.860 ], 00:10:15.860 "product_name": "Raid Volume", 00:10:15.860 "block_size": 512, 00:10:15.860 "num_blocks": 262144, 00:10:15.860 "uuid": "04ffa16d-c5cc-4694-97f5-9e81ef380678", 00:10:15.860 "assigned_rate_limits": { 00:10:15.860 "rw_ios_per_sec": 0, 00:10:15.860 "rw_mbytes_per_sec": 0, 00:10:15.860 "r_mbytes_per_sec": 0, 00:10:15.860 "w_mbytes_per_sec": 0 00:10:15.860 }, 00:10:15.860 "claimed": false, 00:10:15.860 "zoned": false, 00:10:15.860 "supported_io_types": { 00:10:15.860 "read": true, 00:10:15.860 "write": true, 00:10:15.860 "unmap": true, 00:10:15.860 "flush": true, 00:10:15.860 "reset": true, 00:10:15.860 "nvme_admin": false, 00:10:15.860 "nvme_io": false, 00:10:15.860 "nvme_io_md": false, 00:10:15.860 "write_zeroes": true, 00:10:15.860 "zcopy": false, 00:10:15.860 "get_zone_info": false, 00:10:15.860 "zone_management": false, 00:10:15.860 "zone_append": false, 00:10:15.860 "compare": false, 00:10:15.860 "compare_and_write": false, 00:10:15.860 "abort": false, 00:10:15.860 "seek_hole": false, 00:10:15.860 "seek_data": false, 00:10:15.860 "copy": false, 00:10:15.860 "nvme_iov_md": false 00:10:15.860 }, 00:10:15.860 "memory_domains": [ 00:10:15.860 { 00:10:15.860 "dma_device_id": "system", 00:10:15.860 "dma_device_type": 1 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.860 "dma_device_type": 2 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "dma_device_id": "system", 00:10:15.860 "dma_device_type": 1 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.860 "dma_device_type": 2 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "dma_device_id": "system", 00:10:15.860 "dma_device_type": 1 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:15.860 "dma_device_type": 2 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "dma_device_id": "system", 00:10:15.860 "dma_device_type": 1 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.860 "dma_device_type": 2 00:10:15.860 } 00:10:15.860 ], 00:10:15.860 "driver_specific": { 00:10:15.860 "raid": { 00:10:15.860 "uuid": "04ffa16d-c5cc-4694-97f5-9e81ef380678", 00:10:15.860 "strip_size_kb": 64, 00:10:15.860 "state": "online", 00:10:15.860 "raid_level": "concat", 00:10:15.860 "superblock": false, 00:10:15.860 "num_base_bdevs": 4, 00:10:15.860 "num_base_bdevs_discovered": 4, 00:10:15.860 "num_base_bdevs_operational": 4, 00:10:15.860 "base_bdevs_list": [ 00:10:15.860 { 00:10:15.860 "name": "BaseBdev1", 00:10:15.860 "uuid": "82457cb2-1c09-496f-a6f6-31829d6470dc", 00:10:15.860 "is_configured": true, 00:10:15.860 "data_offset": 0, 00:10:15.860 "data_size": 65536 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "name": "BaseBdev2", 00:10:15.860 "uuid": "20ed1de2-15e4-4c02-88bb-ddd2245f46a9", 00:10:15.860 "is_configured": true, 00:10:15.860 "data_offset": 0, 00:10:15.860 "data_size": 65536 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "name": "BaseBdev3", 00:10:15.860 "uuid": "76ced4a7-5d09-4802-bb6e-ff53e520d0e1", 00:10:15.860 "is_configured": true, 00:10:15.860 "data_offset": 0, 00:10:15.860 "data_size": 65536 00:10:15.860 }, 00:10:15.860 { 00:10:15.860 "name": "BaseBdev4", 00:10:15.860 "uuid": "74128cc8-8488-43a4-bc3f-5088ca0f9045", 00:10:15.860 "is_configured": true, 00:10:15.860 "data_offset": 0, 00:10:15.860 "data_size": 65536 00:10:15.860 } 00:10:15.860 ] 00:10:15.860 } 00:10:15.860 } 00:10:15.860 }' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.860 BaseBdev2 
00:10:15.860 BaseBdev3 00:10:15.860 BaseBdev4' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.860 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.120 13:23:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.120 13:23:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.120 [2024-11-26 13:23:04.591310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.120 [2024-11-26 13:23:04.591342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.120 [2024-11-26 13:23:04.591390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.120 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.379 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.379 "name": "Existed_Raid", 00:10:16.379 "uuid": "04ffa16d-c5cc-4694-97f5-9e81ef380678", 00:10:16.379 "strip_size_kb": 64, 00:10:16.379 "state": "offline", 00:10:16.379 "raid_level": "concat", 00:10:16.379 "superblock": false, 00:10:16.379 "num_base_bdevs": 4, 00:10:16.379 "num_base_bdevs_discovered": 3, 00:10:16.379 "num_base_bdevs_operational": 3, 00:10:16.379 "base_bdevs_list": [ 00:10:16.379 { 00:10:16.379 "name": null, 00:10:16.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.379 "is_configured": false, 00:10:16.379 "data_offset": 0, 00:10:16.379 "data_size": 65536 00:10:16.379 }, 00:10:16.379 { 00:10:16.379 "name": "BaseBdev2", 00:10:16.379 "uuid": "20ed1de2-15e4-4c02-88bb-ddd2245f46a9", 00:10:16.379 "is_configured": 
true, 00:10:16.379 "data_offset": 0, 00:10:16.379 "data_size": 65536 00:10:16.379 }, 00:10:16.379 { 00:10:16.379 "name": "BaseBdev3", 00:10:16.379 "uuid": "76ced4a7-5d09-4802-bb6e-ff53e520d0e1", 00:10:16.379 "is_configured": true, 00:10:16.379 "data_offset": 0, 00:10:16.379 "data_size": 65536 00:10:16.379 }, 00:10:16.379 { 00:10:16.379 "name": "BaseBdev4", 00:10:16.379 "uuid": "74128cc8-8488-43a4-bc3f-5088ca0f9045", 00:10:16.379 "is_configured": true, 00:10:16.379 "data_offset": 0, 00:10:16.379 "data_size": 65536 00:10:16.379 } 00:10:16.379 ] 00:10:16.379 }' 00:10:16.379 13:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.379 13:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.638 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.638 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.638 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.638 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.638 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.638 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.638 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.897 [2024-11-26 13:23:05.227323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.897 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.898 [2024-11-26 13:23:05.352201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.898 13:23:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.898 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 [2024-11-26 13:23:05.480776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:17.157 [2024-11-26 13:23:05.480972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 BaseBdev2 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 [ 00:10:17.157 { 00:10:17.157 "name": "BaseBdev2", 00:10:17.157 "aliases": [ 00:10:17.157 "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0" 00:10:17.157 ], 00:10:17.157 "product_name": "Malloc disk", 00:10:17.157 "block_size": 512, 00:10:17.157 "num_blocks": 65536, 00:10:17.157 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:17.157 "assigned_rate_limits": { 00:10:17.157 "rw_ios_per_sec": 0, 00:10:17.157 "rw_mbytes_per_sec": 0, 00:10:17.157 "r_mbytes_per_sec": 0, 00:10:17.157 "w_mbytes_per_sec": 0 00:10:17.157 }, 00:10:17.157 "claimed": false, 00:10:17.157 "zoned": false, 00:10:17.157 "supported_io_types": { 00:10:17.157 "read": true, 00:10:17.157 "write": true, 00:10:17.157 "unmap": true, 00:10:17.157 "flush": true, 00:10:17.157 "reset": true, 00:10:17.157 "nvme_admin": false, 00:10:17.157 "nvme_io": false, 00:10:17.157 "nvme_io_md": false, 00:10:17.157 "write_zeroes": true, 00:10:17.157 "zcopy": true, 00:10:17.157 "get_zone_info": false, 00:10:17.157 "zone_management": false, 00:10:17.157 "zone_append": false, 00:10:17.157 "compare": false, 00:10:17.157 "compare_and_write": false, 00:10:17.157 "abort": true, 00:10:17.157 "seek_hole": false, 00:10:17.157 
"seek_data": false, 00:10:17.157 "copy": true, 00:10:17.157 "nvme_iov_md": false 00:10:17.157 }, 00:10:17.157 "memory_domains": [ 00:10:17.157 { 00:10:17.157 "dma_device_id": "system", 00:10:17.157 "dma_device_type": 1 00:10:17.157 }, 00:10:17.157 { 00:10:17.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.157 "dma_device_type": 2 00:10:17.157 } 00:10:17.157 ], 00:10:17.157 "driver_specific": {} 00:10:17.157 } 00:10:17.157 ] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 BaseBdev3 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.435 [ 00:10:17.435 { 00:10:17.435 "name": "BaseBdev3", 00:10:17.435 "aliases": [ 00:10:17.435 "e53fd995-0c81-47e3-9023-c81258c4070e" 00:10:17.435 ], 00:10:17.435 "product_name": "Malloc disk", 00:10:17.435 "block_size": 512, 00:10:17.435 "num_blocks": 65536, 00:10:17.435 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:17.435 "assigned_rate_limits": { 00:10:17.435 "rw_ios_per_sec": 0, 00:10:17.435 "rw_mbytes_per_sec": 0, 00:10:17.435 "r_mbytes_per_sec": 0, 00:10:17.435 "w_mbytes_per_sec": 0 00:10:17.435 }, 00:10:17.435 "claimed": false, 00:10:17.435 "zoned": false, 00:10:17.435 "supported_io_types": { 00:10:17.435 "read": true, 00:10:17.435 "write": true, 00:10:17.435 "unmap": true, 00:10:17.435 "flush": true, 00:10:17.435 "reset": true, 00:10:17.435 "nvme_admin": false, 00:10:17.435 "nvme_io": false, 00:10:17.435 "nvme_io_md": false, 00:10:17.435 "write_zeroes": true, 00:10:17.435 "zcopy": true, 00:10:17.435 "get_zone_info": false, 00:10:17.435 "zone_management": false, 00:10:17.435 "zone_append": false, 00:10:17.435 "compare": false, 00:10:17.435 "compare_and_write": false, 00:10:17.435 "abort": true, 00:10:17.435 "seek_hole": false, 00:10:17.435 "seek_data": false, 
00:10:17.435 "copy": true, 00:10:17.435 "nvme_iov_md": false 00:10:17.435 }, 00:10:17.435 "memory_domains": [ 00:10:17.435 { 00:10:17.435 "dma_device_id": "system", 00:10:17.435 "dma_device_type": 1 00:10:17.435 }, 00:10:17.435 { 00:10:17.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.435 "dma_device_type": 2 00:10:17.435 } 00:10:17.435 ], 00:10:17.435 "driver_specific": {} 00:10:17.435 } 00:10:17.435 ] 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.435 BaseBdev4 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.435 
13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.435 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.435 [ 00:10:17.435 { 00:10:17.435 "name": "BaseBdev4", 00:10:17.435 "aliases": [ 00:10:17.435 "835d9e70-d5c9-41ed-b86b-9b69897723a6" 00:10:17.435 ], 00:10:17.435 "product_name": "Malloc disk", 00:10:17.435 "block_size": 512, 00:10:17.435 "num_blocks": 65536, 00:10:17.435 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:17.435 "assigned_rate_limits": { 00:10:17.435 "rw_ios_per_sec": 0, 00:10:17.435 "rw_mbytes_per_sec": 0, 00:10:17.435 "r_mbytes_per_sec": 0, 00:10:17.435 "w_mbytes_per_sec": 0 00:10:17.435 }, 00:10:17.435 "claimed": false, 00:10:17.435 "zoned": false, 00:10:17.435 "supported_io_types": { 00:10:17.435 "read": true, 00:10:17.435 "write": true, 00:10:17.435 "unmap": true, 00:10:17.435 "flush": true, 00:10:17.435 "reset": true, 00:10:17.435 "nvme_admin": false, 00:10:17.435 "nvme_io": false, 00:10:17.435 "nvme_io_md": false, 00:10:17.435 "write_zeroes": true, 00:10:17.435 "zcopy": true, 00:10:17.435 "get_zone_info": false, 00:10:17.435 "zone_management": false, 00:10:17.435 "zone_append": false, 00:10:17.435 "compare": false, 00:10:17.435 "compare_and_write": false, 00:10:17.435 "abort": true, 00:10:17.435 "seek_hole": false, 00:10:17.435 "seek_data": false, 00:10:17.435 
"copy": true, 00:10:17.435 "nvme_iov_md": false 00:10:17.435 }, 00:10:17.435 "memory_domains": [ 00:10:17.435 { 00:10:17.435 "dma_device_id": "system", 00:10:17.435 "dma_device_type": 1 00:10:17.435 }, 00:10:17.435 { 00:10:17.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.436 "dma_device_type": 2 00:10:17.436 } 00:10:17.436 ], 00:10:17.436 "driver_specific": {} 00:10:17.436 } 00:10:17.436 ] 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.436 [2024-11-26 13:23:05.815241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.436 [2024-11-26 13:23:05.815305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.436 [2024-11-26 13:23:05.815334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.436 [2024-11-26 13:23:05.817392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.436 [2024-11-26 13:23:05.817457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.436 13:23:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.436 "name": "Existed_Raid", 00:10:17.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.436 "strip_size_kb": 64, 00:10:17.436 "state": "configuring", 00:10:17.436 
"raid_level": "concat", 00:10:17.436 "superblock": false, 00:10:17.436 "num_base_bdevs": 4, 00:10:17.436 "num_base_bdevs_discovered": 3, 00:10:17.436 "num_base_bdevs_operational": 4, 00:10:17.436 "base_bdevs_list": [ 00:10:17.436 { 00:10:17.436 "name": "BaseBdev1", 00:10:17.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.436 "is_configured": false, 00:10:17.436 "data_offset": 0, 00:10:17.436 "data_size": 0 00:10:17.436 }, 00:10:17.436 { 00:10:17.436 "name": "BaseBdev2", 00:10:17.436 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:17.436 "is_configured": true, 00:10:17.436 "data_offset": 0, 00:10:17.436 "data_size": 65536 00:10:17.436 }, 00:10:17.436 { 00:10:17.436 "name": "BaseBdev3", 00:10:17.436 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:17.436 "is_configured": true, 00:10:17.436 "data_offset": 0, 00:10:17.436 "data_size": 65536 00:10:17.436 }, 00:10:17.436 { 00:10:17.436 "name": "BaseBdev4", 00:10:17.436 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:17.436 "is_configured": true, 00:10:17.436 "data_offset": 0, 00:10:17.436 "data_size": 65536 00:10:17.436 } 00:10:17.436 ] 00:10:17.436 }' 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.436 13:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.005 [2024-11-26 13:23:06.339333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.005 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.005 "name": "Existed_Raid", 00:10:18.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.005 "strip_size_kb": 64, 00:10:18.005 "state": "configuring", 00:10:18.005 "raid_level": "concat", 00:10:18.005 "superblock": false, 
00:10:18.005 "num_base_bdevs": 4, 00:10:18.005 "num_base_bdevs_discovered": 2, 00:10:18.005 "num_base_bdevs_operational": 4, 00:10:18.005 "base_bdevs_list": [ 00:10:18.005 { 00:10:18.005 "name": "BaseBdev1", 00:10:18.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.005 "is_configured": false, 00:10:18.005 "data_offset": 0, 00:10:18.005 "data_size": 0 00:10:18.005 }, 00:10:18.005 { 00:10:18.005 "name": null, 00:10:18.005 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:18.005 "is_configured": false, 00:10:18.005 "data_offset": 0, 00:10:18.005 "data_size": 65536 00:10:18.005 }, 00:10:18.005 { 00:10:18.005 "name": "BaseBdev3", 00:10:18.006 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:18.006 "is_configured": true, 00:10:18.006 "data_offset": 0, 00:10:18.006 "data_size": 65536 00:10:18.006 }, 00:10:18.006 { 00:10:18.006 "name": "BaseBdev4", 00:10:18.006 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:18.006 "is_configured": true, 00:10:18.006 "data_offset": 0, 00:10:18.006 "data_size": 65536 00:10:18.006 } 00:10:18.006 ] 00:10:18.006 }' 00:10:18.006 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.006 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.574 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.574 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.574 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.574 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.574 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.574 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.574 13:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.575 [2024-11-26 13:23:06.939784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.575 BaseBdev1 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.575 [ 00:10:18.575 { 00:10:18.575 "name": "BaseBdev1", 00:10:18.575 "aliases": [ 00:10:18.575 "a3af98f8-e19d-454e-ab64-e9d7548bc469" 00:10:18.575 ], 00:10:18.575 "product_name": "Malloc disk", 00:10:18.575 "block_size": 512, 00:10:18.575 "num_blocks": 65536, 00:10:18.575 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:18.575 "assigned_rate_limits": { 00:10:18.575 "rw_ios_per_sec": 0, 00:10:18.575 "rw_mbytes_per_sec": 0, 00:10:18.575 "r_mbytes_per_sec": 0, 00:10:18.575 "w_mbytes_per_sec": 0 00:10:18.575 }, 00:10:18.575 "claimed": true, 00:10:18.575 "claim_type": "exclusive_write", 00:10:18.575 "zoned": false, 00:10:18.575 "supported_io_types": { 00:10:18.575 "read": true, 00:10:18.575 "write": true, 00:10:18.575 "unmap": true, 00:10:18.575 "flush": true, 00:10:18.575 "reset": true, 00:10:18.575 "nvme_admin": false, 00:10:18.575 "nvme_io": false, 00:10:18.575 "nvme_io_md": false, 00:10:18.575 "write_zeroes": true, 00:10:18.575 "zcopy": true, 00:10:18.575 "get_zone_info": false, 00:10:18.575 "zone_management": false, 00:10:18.575 "zone_append": false, 00:10:18.575 "compare": false, 00:10:18.575 "compare_and_write": false, 00:10:18.575 "abort": true, 00:10:18.575 "seek_hole": false, 00:10:18.575 "seek_data": false, 00:10:18.575 "copy": true, 00:10:18.575 "nvme_iov_md": false 00:10:18.575 }, 00:10:18.575 "memory_domains": [ 00:10:18.575 { 00:10:18.575 "dma_device_id": "system", 00:10:18.575 "dma_device_type": 1 00:10:18.575 }, 00:10:18.575 { 00:10:18.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.575 "dma_device_type": 2 00:10:18.575 } 00:10:18.575 ], 00:10:18.575 "driver_specific": {} 00:10:18.575 } 00:10:18.575 ] 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.575 13:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.575 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.575 "name": "Existed_Raid", 00:10:18.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.575 "strip_size_kb": 64, 00:10:18.575 "state": "configuring", 00:10:18.575 "raid_level": "concat", 00:10:18.575 "superblock": false, 
00:10:18.575 "num_base_bdevs": 4, 00:10:18.575 "num_base_bdevs_discovered": 3, 00:10:18.575 "num_base_bdevs_operational": 4, 00:10:18.575 "base_bdevs_list": [ 00:10:18.575 { 00:10:18.575 "name": "BaseBdev1", 00:10:18.575 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:18.575 "is_configured": true, 00:10:18.575 "data_offset": 0, 00:10:18.575 "data_size": 65536 00:10:18.575 }, 00:10:18.575 { 00:10:18.575 "name": null, 00:10:18.575 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:18.575 "is_configured": false, 00:10:18.575 "data_offset": 0, 00:10:18.575 "data_size": 65536 00:10:18.575 }, 00:10:18.575 { 00:10:18.575 "name": "BaseBdev3", 00:10:18.575 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:18.575 "is_configured": true, 00:10:18.575 "data_offset": 0, 00:10:18.575 "data_size": 65536 00:10:18.575 }, 00:10:18.575 { 00:10:18.575 "name": "BaseBdev4", 00:10:18.575 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:18.575 "is_configured": true, 00:10:18.575 "data_offset": 0, 00:10:18.575 "data_size": 65536 00:10:18.575 } 00:10:18.575 ] 00:10:18.575 }' 00:10:18.575 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.575 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.144 13:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.144 [2024-11-26 13:23:07.539959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.144 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.144 "name": "Existed_Raid", 00:10:19.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.144 "strip_size_kb": 64, 00:10:19.144 "state": "configuring", 00:10:19.144 "raid_level": "concat", 00:10:19.144 "superblock": false, 00:10:19.144 "num_base_bdevs": 4, 00:10:19.144 "num_base_bdevs_discovered": 2, 00:10:19.144 "num_base_bdevs_operational": 4, 00:10:19.144 "base_bdevs_list": [ 00:10:19.144 { 00:10:19.144 "name": "BaseBdev1", 00:10:19.144 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:19.144 "is_configured": true, 00:10:19.144 "data_offset": 0, 00:10:19.144 "data_size": 65536 00:10:19.144 }, 00:10:19.144 { 00:10:19.145 "name": null, 00:10:19.145 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:19.145 "is_configured": false, 00:10:19.145 "data_offset": 0, 00:10:19.145 "data_size": 65536 00:10:19.145 }, 00:10:19.145 { 00:10:19.145 "name": null, 00:10:19.145 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:19.145 "is_configured": false, 00:10:19.145 "data_offset": 0, 00:10:19.145 "data_size": 65536 00:10:19.145 }, 00:10:19.145 { 00:10:19.145 "name": "BaseBdev4", 00:10:19.145 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:19.145 "is_configured": true, 00:10:19.145 "data_offset": 0, 00:10:19.145 "data_size": 65536 00:10:19.145 } 00:10:19.145 ] 00:10:19.145 }' 00:10:19.145 13:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.145 13:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.713 [2024-11-26 13:23:08.104064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.713 "name": "Existed_Raid", 00:10:19.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.713 "strip_size_kb": 64, 00:10:19.713 "state": "configuring", 00:10:19.713 "raid_level": "concat", 00:10:19.713 "superblock": false, 00:10:19.713 "num_base_bdevs": 4, 00:10:19.713 "num_base_bdevs_discovered": 3, 00:10:19.713 "num_base_bdevs_operational": 4, 00:10:19.713 "base_bdevs_list": [ 00:10:19.713 { 00:10:19.713 "name": "BaseBdev1", 00:10:19.713 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:19.713 "is_configured": true, 00:10:19.713 "data_offset": 0, 00:10:19.713 "data_size": 65536 00:10:19.713 }, 00:10:19.713 { 00:10:19.713 "name": null, 00:10:19.713 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:19.713 "is_configured": false, 00:10:19.713 "data_offset": 0, 00:10:19.713 "data_size": 65536 00:10:19.713 }, 00:10:19.713 { 00:10:19.713 "name": "BaseBdev3", 00:10:19.713 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:19.713 
"is_configured": true, 00:10:19.713 "data_offset": 0, 00:10:19.713 "data_size": 65536 00:10:19.713 }, 00:10:19.713 { 00:10:19.713 "name": "BaseBdev4", 00:10:19.713 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:19.713 "is_configured": true, 00:10:19.713 "data_offset": 0, 00:10:19.713 "data_size": 65536 00:10:19.713 } 00:10:19.713 ] 00:10:19.713 }' 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.713 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.282 [2024-11-26 13:23:08.676197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.282 "name": "Existed_Raid", 00:10:20.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.282 "strip_size_kb": 64, 00:10:20.282 "state": "configuring", 00:10:20.282 "raid_level": "concat", 00:10:20.282 "superblock": false, 00:10:20.282 "num_base_bdevs": 4, 00:10:20.282 "num_base_bdevs_discovered": 2, 00:10:20.282 "num_base_bdevs_operational": 4, 
00:10:20.282 "base_bdevs_list": [ 00:10:20.282 { 00:10:20.282 "name": null, 00:10:20.282 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:20.282 "is_configured": false, 00:10:20.282 "data_offset": 0, 00:10:20.282 "data_size": 65536 00:10:20.282 }, 00:10:20.282 { 00:10:20.282 "name": null, 00:10:20.282 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:20.282 "is_configured": false, 00:10:20.282 "data_offset": 0, 00:10:20.282 "data_size": 65536 00:10:20.282 }, 00:10:20.282 { 00:10:20.282 "name": "BaseBdev3", 00:10:20.282 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:20.282 "is_configured": true, 00:10:20.282 "data_offset": 0, 00:10:20.282 "data_size": 65536 00:10:20.282 }, 00:10:20.282 { 00:10:20.282 "name": "BaseBdev4", 00:10:20.282 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:20.282 "is_configured": true, 00:10:20.282 "data_offset": 0, 00:10:20.282 "data_size": 65536 00:10:20.282 } 00:10:20.282 ] 00:10:20.282 }' 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.282 13:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:20.867 13:23:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.867 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.868 [2024-11-26 13:23:09.310930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.868 13:23:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.868 "name": "Existed_Raid", 00:10:20.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.868 "strip_size_kb": 64, 00:10:20.868 "state": "configuring", 00:10:20.868 "raid_level": "concat", 00:10:20.868 "superblock": false, 00:10:20.868 "num_base_bdevs": 4, 00:10:20.868 "num_base_bdevs_discovered": 3, 00:10:20.868 "num_base_bdevs_operational": 4, 00:10:20.868 "base_bdevs_list": [ 00:10:20.868 { 00:10:20.868 "name": null, 00:10:20.868 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:20.868 "is_configured": false, 00:10:20.868 "data_offset": 0, 00:10:20.868 "data_size": 65536 00:10:20.868 }, 00:10:20.868 { 00:10:20.868 "name": "BaseBdev2", 00:10:20.868 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:20.868 "is_configured": true, 00:10:20.868 "data_offset": 0, 00:10:20.868 "data_size": 65536 00:10:20.868 }, 00:10:20.868 { 00:10:20.868 "name": "BaseBdev3", 00:10:20.868 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:20.868 "is_configured": true, 00:10:20.868 "data_offset": 0, 00:10:20.868 "data_size": 65536 00:10:20.868 }, 00:10:20.868 { 00:10:20.868 "name": "BaseBdev4", 00:10:20.868 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:20.868 "is_configured": true, 00:10:20.868 "data_offset": 0, 00:10:20.868 "data_size": 65536 00:10:20.868 } 00:10:20.868 ] 00:10:20.868 }' 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.868 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.439 13:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a3af98f8-e19d-454e-ab64-e9d7548bc469 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 [2024-11-26 13:23:09.960195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:21.439 [2024-11-26 13:23:09.960281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:21.439 [2024-11-26 13:23:09.960293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:21.439 [2024-11-26 13:23:09.960584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:21.439 
[2024-11-26 13:23:09.960762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.439 [2024-11-26 13:23:09.960789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:21.439 [2024-11-26 13:23:09.961065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.439 NewBaseBdev 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:21.439 [ 00:10:21.439 { 00:10:21.439 "name": "NewBaseBdev", 00:10:21.439 "aliases": [ 00:10:21.439 "a3af98f8-e19d-454e-ab64-e9d7548bc469" 00:10:21.439 ], 00:10:21.439 "product_name": "Malloc disk", 00:10:21.440 "block_size": 512, 00:10:21.440 "num_blocks": 65536, 00:10:21.440 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:21.440 "assigned_rate_limits": { 00:10:21.440 "rw_ios_per_sec": 0, 00:10:21.440 "rw_mbytes_per_sec": 0, 00:10:21.440 "r_mbytes_per_sec": 0, 00:10:21.440 "w_mbytes_per_sec": 0 00:10:21.440 }, 00:10:21.440 "claimed": true, 00:10:21.440 "claim_type": "exclusive_write", 00:10:21.440 "zoned": false, 00:10:21.440 "supported_io_types": { 00:10:21.440 "read": true, 00:10:21.440 "write": true, 00:10:21.440 "unmap": true, 00:10:21.440 "flush": true, 00:10:21.440 "reset": true, 00:10:21.440 "nvme_admin": false, 00:10:21.440 "nvme_io": false, 00:10:21.440 "nvme_io_md": false, 00:10:21.440 "write_zeroes": true, 00:10:21.440 "zcopy": true, 00:10:21.440 "get_zone_info": false, 00:10:21.440 "zone_management": false, 00:10:21.440 "zone_append": false, 00:10:21.440 "compare": false, 00:10:21.440 "compare_and_write": false, 00:10:21.440 "abort": true, 00:10:21.440 "seek_hole": false, 00:10:21.440 "seek_data": false, 00:10:21.440 "copy": true, 00:10:21.440 "nvme_iov_md": false 00:10:21.440 }, 00:10:21.440 "memory_domains": [ 00:10:21.440 { 00:10:21.440 "dma_device_id": "system", 00:10:21.440 "dma_device_type": 1 00:10:21.440 }, 00:10:21.440 { 00:10:21.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.440 "dma_device_type": 2 00:10:21.440 } 00:10:21.440 ], 00:10:21.440 "driver_specific": {} 00:10:21.440 } 00:10:21.440 ] 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.440 13:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.698 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.698 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.698 "name": "Existed_Raid", 00:10:21.698 "uuid": "cbc36a8d-246c-417e-8c51-f856a6f07556", 00:10:21.698 "strip_size_kb": 64, 00:10:21.698 "state": "online", 00:10:21.698 "raid_level": "concat", 00:10:21.698 "superblock": false, 00:10:21.698 "num_base_bdevs": 4, 00:10:21.698 
"num_base_bdevs_discovered": 4, 00:10:21.698 "num_base_bdevs_operational": 4, 00:10:21.698 "base_bdevs_list": [ 00:10:21.698 { 00:10:21.698 "name": "NewBaseBdev", 00:10:21.698 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:21.698 "is_configured": true, 00:10:21.698 "data_offset": 0, 00:10:21.698 "data_size": 65536 00:10:21.698 }, 00:10:21.698 { 00:10:21.698 "name": "BaseBdev2", 00:10:21.698 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:21.698 "is_configured": true, 00:10:21.698 "data_offset": 0, 00:10:21.698 "data_size": 65536 00:10:21.698 }, 00:10:21.698 { 00:10:21.699 "name": "BaseBdev3", 00:10:21.699 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:21.699 "is_configured": true, 00:10:21.699 "data_offset": 0, 00:10:21.699 "data_size": 65536 00:10:21.699 }, 00:10:21.699 { 00:10:21.699 "name": "BaseBdev4", 00:10:21.699 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:21.699 "is_configured": true, 00:10:21.699 "data_offset": 0, 00:10:21.699 "data_size": 65536 00:10:21.699 } 00:10:21.699 ] 00:10:21.699 }' 00:10:21.699 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.699 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.958 [2024-11-26 13:23:10.496744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.958 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.217 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.217 "name": "Existed_Raid", 00:10:22.217 "aliases": [ 00:10:22.217 "cbc36a8d-246c-417e-8c51-f856a6f07556" 00:10:22.217 ], 00:10:22.217 "product_name": "Raid Volume", 00:10:22.217 "block_size": 512, 00:10:22.217 "num_blocks": 262144, 00:10:22.217 "uuid": "cbc36a8d-246c-417e-8c51-f856a6f07556", 00:10:22.217 "assigned_rate_limits": { 00:10:22.217 "rw_ios_per_sec": 0, 00:10:22.217 "rw_mbytes_per_sec": 0, 00:10:22.217 "r_mbytes_per_sec": 0, 00:10:22.217 "w_mbytes_per_sec": 0 00:10:22.217 }, 00:10:22.217 "claimed": false, 00:10:22.217 "zoned": false, 00:10:22.217 "supported_io_types": { 00:10:22.217 "read": true, 00:10:22.217 "write": true, 00:10:22.217 "unmap": true, 00:10:22.217 "flush": true, 00:10:22.217 "reset": true, 00:10:22.217 "nvme_admin": false, 00:10:22.217 "nvme_io": false, 00:10:22.217 "nvme_io_md": false, 00:10:22.217 "write_zeroes": true, 00:10:22.217 "zcopy": false, 00:10:22.217 "get_zone_info": false, 00:10:22.217 "zone_management": false, 00:10:22.217 "zone_append": false, 00:10:22.217 "compare": false, 00:10:22.217 "compare_and_write": false, 00:10:22.217 "abort": false, 00:10:22.217 "seek_hole": false, 00:10:22.217 "seek_data": false, 00:10:22.217 "copy": false, 00:10:22.217 "nvme_iov_md": false 00:10:22.217 }, 00:10:22.217 "memory_domains": [ 
00:10:22.217 { 00:10:22.217 "dma_device_id": "system", 00:10:22.217 "dma_device_type": 1 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.217 "dma_device_type": 2 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "dma_device_id": "system", 00:10:22.217 "dma_device_type": 1 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.217 "dma_device_type": 2 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "dma_device_id": "system", 00:10:22.217 "dma_device_type": 1 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.217 "dma_device_type": 2 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "dma_device_id": "system", 00:10:22.217 "dma_device_type": 1 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.217 "dma_device_type": 2 00:10:22.217 } 00:10:22.217 ], 00:10:22.217 "driver_specific": { 00:10:22.217 "raid": { 00:10:22.217 "uuid": "cbc36a8d-246c-417e-8c51-f856a6f07556", 00:10:22.217 "strip_size_kb": 64, 00:10:22.217 "state": "online", 00:10:22.217 "raid_level": "concat", 00:10:22.217 "superblock": false, 00:10:22.217 "num_base_bdevs": 4, 00:10:22.217 "num_base_bdevs_discovered": 4, 00:10:22.217 "num_base_bdevs_operational": 4, 00:10:22.217 "base_bdevs_list": [ 00:10:22.217 { 00:10:22.217 "name": "NewBaseBdev", 00:10:22.217 "uuid": "a3af98f8-e19d-454e-ab64-e9d7548bc469", 00:10:22.217 "is_configured": true, 00:10:22.217 "data_offset": 0, 00:10:22.217 "data_size": 65536 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "name": "BaseBdev2", 00:10:22.217 "uuid": "ab0fdcd8-8714-4234-8d71-9dfc1f490cb0", 00:10:22.217 "is_configured": true, 00:10:22.217 "data_offset": 0, 00:10:22.217 "data_size": 65536 00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "name": "BaseBdev3", 00:10:22.217 "uuid": "e53fd995-0c81-47e3-9023-c81258c4070e", 00:10:22.217 "is_configured": true, 00:10:22.217 "data_offset": 0, 00:10:22.217 "data_size": 65536 
00:10:22.217 }, 00:10:22.217 { 00:10:22.217 "name": "BaseBdev4", 00:10:22.218 "uuid": "835d9e70-d5c9-41ed-b86b-9b69897723a6", 00:10:22.218 "is_configured": true, 00:10:22.218 "data_offset": 0, 00:10:22.218 "data_size": 65536 00:10:22.218 } 00:10:22.218 ] 00:10:22.218 } 00:10:22.218 } 00:10:22.218 }' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.218 BaseBdev2 00:10:22.218 BaseBdev3 00:10:22.218 BaseBdev4' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.218 
13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.218 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.477 [2024-11-26 13:23:10.864473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.477 [2024-11-26 13:23:10.864502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.477 [2024-11-26 13:23:10.864568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.477 [2024-11-26 13:23:10.864633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.477 [2024-11-26 13:23:10.864646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 70821 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 70821 ']' 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 70821 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70821 00:10:22.477 killing process with pid 70821 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70821' 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 70821 00:10:22.477 [2024-11-26 13:23:10.904512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.477 13:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 70821 00:10:22.736 [2024-11-26 13:23:11.169103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.674 00:10:23.674 real 0m12.259s 00:10:23.674 user 0m20.702s 00:10:23.674 sys 0m1.728s 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.674 ************************************ 00:10:23.674 END TEST raid_state_function_test 00:10:23.674 ************************************ 00:10:23.674 13:23:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:10:23.674 13:23:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.674 13:23:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.674 13:23:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.674 ************************************ 00:10:23.674 START TEST raid_state_function_test_sb 00:10:23.674 ************************************ 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.674 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71498 00:10:23.675 Process raid pid: 71498 00:10:23.675 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71498' 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71498 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71498 ']' 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.675 13:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.675 [2024-11-26 13:23:12.204145] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
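The `strip_size_create_arg='-z 64'` and `superblock_create_arg=-s` assignments in the trace come from two conditionals in `bdev_raid.sh@215-223`. A minimal sketch of that logic, assuming the same inputs as this run (`concat`, superblock enabled); the standalone form is an illustration, not the harness itself:

```shell
#!/usr/bin/env bash
# Sketch of the create-argument selection traced at bdev_raid.sh@215-223.
raid_level=concat   # from raid_state_function_test concat 4 true
superblock=true
strip_size_create_arg=""
superblock_create_arg=""

# raid1 has no strip size; every other level gets -z 64 in this test.
if [ "$raid_level" != raid1 ]; then
  strip_size=64
  strip_size_create_arg="-z $strip_size"
fi

# The _sb variant passes superblock=true, which adds -s to bdev_raid_create.
if [ "$superblock" = true ]; then
  superblock_create_arg="-s"
fi

echo "$strip_size_create_arg $superblock_create_arg"   # -z 64 -s
```

These two variables are exactly what later feeds the traced `rpc_cmd bdev_raid_create -z 64 -s -r concat ...` invocations.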
00:10:23.675 [2024-11-26 13:23:12.204782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.934 [2024-11-26 13:23:12.395708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.193 [2024-11-26 13:23:12.541655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.193 [2024-11-26 13:23:12.732537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.193 [2024-11-26 13:23:12.732835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.761 [2024-11-26 13:23:13.150933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.761 [2024-11-26 13:23:13.150994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.761 [2024-11-26 13:23:13.151010] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.761 [2024-11-26 13:23:13.151025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.761 [2024-11-26 13:23:13.151033] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:24.761 [2024-11-26 13:23:13.151045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.761 [2024-11-26 13:23:13.151053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.761 [2024-11-26 13:23:13.151065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.761 
13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.761 "name": "Existed_Raid", 00:10:24.761 "uuid": "4b6b7fd3-044a-4591-955e-8fc30a82a0a4", 00:10:24.761 "strip_size_kb": 64, 00:10:24.761 "state": "configuring", 00:10:24.761 "raid_level": "concat", 00:10:24.761 "superblock": true, 00:10:24.761 "num_base_bdevs": 4, 00:10:24.761 "num_base_bdevs_discovered": 0, 00:10:24.761 "num_base_bdevs_operational": 4, 00:10:24.761 "base_bdevs_list": [ 00:10:24.761 { 00:10:24.761 "name": "BaseBdev1", 00:10:24.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.761 "is_configured": false, 00:10:24.761 "data_offset": 0, 00:10:24.761 "data_size": 0 00:10:24.761 }, 00:10:24.761 { 00:10:24.761 "name": "BaseBdev2", 00:10:24.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.761 "is_configured": false, 00:10:24.761 "data_offset": 0, 00:10:24.761 "data_size": 0 00:10:24.761 }, 00:10:24.761 { 00:10:24.761 "name": "BaseBdev3", 00:10:24.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.761 "is_configured": false, 00:10:24.761 "data_offset": 0, 00:10:24.761 "data_size": 0 00:10:24.761 }, 00:10:24.761 { 00:10:24.761 "name": "BaseBdev4", 00:10:24.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.761 "is_configured": false, 00:10:24.761 "data_offset": 0, 00:10:24.761 "data_size": 0 00:10:24.761 } 00:10:24.761 ] 00:10:24.761 }' 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.761 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.330 13:23:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.330 [2024-11-26 13:23:13.630952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.330 [2024-11-26 13:23:13.631116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.330 [2024-11-26 13:23:13.638969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.330 [2024-11-26 13:23:13.639010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.330 [2024-11-26 13:23:13.639023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.330 [2024-11-26 13:23:13.639037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.330 [2024-11-26 13:23:13.639045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.330 [2024-11-26 13:23:13.639057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.330 [2024-11-26 13:23:13.639064] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:25.330 [2024-11-26 13:23:13.639077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.330 [2024-11-26 13:23:13.681182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.330 BaseBdev1 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.330 [ 00:10:25.330 { 00:10:25.330 "name": "BaseBdev1", 00:10:25.330 "aliases": [ 00:10:25.330 "d59791ee-e8b8-4837-b4e0-39ab06463333" 00:10:25.330 ], 00:10:25.330 "product_name": "Malloc disk", 00:10:25.330 "block_size": 512, 00:10:25.330 "num_blocks": 65536, 00:10:25.330 "uuid": "d59791ee-e8b8-4837-b4e0-39ab06463333", 00:10:25.330 "assigned_rate_limits": { 00:10:25.330 "rw_ios_per_sec": 0, 00:10:25.330 "rw_mbytes_per_sec": 0, 00:10:25.330 "r_mbytes_per_sec": 0, 00:10:25.330 "w_mbytes_per_sec": 0 00:10:25.330 }, 00:10:25.330 "claimed": true, 00:10:25.330 "claim_type": "exclusive_write", 00:10:25.330 "zoned": false, 00:10:25.330 "supported_io_types": { 00:10:25.330 "read": true, 00:10:25.330 "write": true, 00:10:25.330 "unmap": true, 00:10:25.330 "flush": true, 00:10:25.330 "reset": true, 00:10:25.330 "nvme_admin": false, 00:10:25.330 "nvme_io": false, 00:10:25.330 "nvme_io_md": false, 00:10:25.330 "write_zeroes": true, 00:10:25.330 "zcopy": true, 00:10:25.330 "get_zone_info": false, 00:10:25.330 "zone_management": false, 00:10:25.330 "zone_append": false, 00:10:25.330 "compare": false, 00:10:25.330 "compare_and_write": false, 00:10:25.330 "abort": true, 00:10:25.330 "seek_hole": false, 00:10:25.330 "seek_data": false, 00:10:25.330 "copy": true, 00:10:25.330 "nvme_iov_md": false 00:10:25.330 }, 00:10:25.330 "memory_domains": [ 00:10:25.330 { 00:10:25.330 "dma_device_id": "system", 00:10:25.330 "dma_device_type": 1 00:10:25.330 }, 00:10:25.330 { 00:10:25.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.330 "dma_device_type": 2 00:10:25.330 } 
00:10:25.330 ], 00:10:25.330 "driver_specific": {} 00:10:25.330 } 00:10:25.330 ] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.330 13:23:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.330 "name": "Existed_Raid", 00:10:25.330 "uuid": "7b783fb0-42b5-4bf3-a995-befc3eae739a", 00:10:25.330 "strip_size_kb": 64, 00:10:25.330 "state": "configuring", 00:10:25.330 "raid_level": "concat", 00:10:25.330 "superblock": true, 00:10:25.330 "num_base_bdevs": 4, 00:10:25.330 "num_base_bdevs_discovered": 1, 00:10:25.330 "num_base_bdevs_operational": 4, 00:10:25.330 "base_bdevs_list": [ 00:10:25.330 { 00:10:25.330 "name": "BaseBdev1", 00:10:25.330 "uuid": "d59791ee-e8b8-4837-b4e0-39ab06463333", 00:10:25.330 "is_configured": true, 00:10:25.330 "data_offset": 2048, 00:10:25.330 "data_size": 63488 00:10:25.330 }, 00:10:25.330 { 00:10:25.330 "name": "BaseBdev2", 00:10:25.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.330 "is_configured": false, 00:10:25.330 "data_offset": 0, 00:10:25.330 "data_size": 0 00:10:25.330 }, 00:10:25.330 { 00:10:25.330 "name": "BaseBdev3", 00:10:25.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.330 "is_configured": false, 00:10:25.330 "data_offset": 0, 00:10:25.330 "data_size": 0 00:10:25.330 }, 00:10:25.330 { 00:10:25.330 "name": "BaseBdev4", 00:10:25.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.330 "is_configured": false, 00:10:25.330 "data_offset": 0, 00:10:25.330 "data_size": 0 00:10:25.330 } 00:10:25.330 ] 00:10:25.330 }' 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.330 13:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.930 13:23:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.930 [2024-11-26 13:23:14.197317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.930 [2024-11-26 13:23:14.197360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.930 [2024-11-26 13:23:14.205381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.930 [2024-11-26 13:23:14.207683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.930 [2024-11-26 13:23:14.207731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.930 [2024-11-26 13:23:14.207745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.930 [2024-11-26 13:23:14.207760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.930 [2024-11-26 13:23:14.207769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.930 [2024-11-26 13:23:14.207781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.930 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.931 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.931 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.931 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.931 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.931 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:25.931 "name": "Existed_Raid", 00:10:25.931 "uuid": "a6a95e92-84b1-45cc-9dcf-22e067c9ee35", 00:10:25.931 "strip_size_kb": 64, 00:10:25.931 "state": "configuring", 00:10:25.931 "raid_level": "concat", 00:10:25.931 "superblock": true, 00:10:25.931 "num_base_bdevs": 4, 00:10:25.931 "num_base_bdevs_discovered": 1, 00:10:25.931 "num_base_bdevs_operational": 4, 00:10:25.931 "base_bdevs_list": [ 00:10:25.931 { 00:10:25.931 "name": "BaseBdev1", 00:10:25.931 "uuid": "d59791ee-e8b8-4837-b4e0-39ab06463333", 00:10:25.931 "is_configured": true, 00:10:25.931 "data_offset": 2048, 00:10:25.931 "data_size": 63488 00:10:25.931 }, 00:10:25.931 { 00:10:25.931 "name": "BaseBdev2", 00:10:25.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.931 "is_configured": false, 00:10:25.931 "data_offset": 0, 00:10:25.931 "data_size": 0 00:10:25.931 }, 00:10:25.931 { 00:10:25.931 "name": "BaseBdev3", 00:10:25.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.931 "is_configured": false, 00:10:25.931 "data_offset": 0, 00:10:25.931 "data_size": 0 00:10:25.931 }, 00:10:25.931 { 00:10:25.931 "name": "BaseBdev4", 00:10:25.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.931 "is_configured": false, 00:10:25.931 "data_offset": 0, 00:10:25.931 "data_size": 0 00:10:25.931 } 00:10:25.931 ] 00:10:25.931 }' 00:10:25.931 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.931 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.204 [2024-11-26 13:23:14.749521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:26.204 BaseBdev2 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.204 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.473 [ 00:10:26.473 { 00:10:26.473 "name": "BaseBdev2", 00:10:26.473 "aliases": [ 00:10:26.473 "4261033f-e2e8-43a8-b185-a32d1dc20950" 00:10:26.473 ], 00:10:26.473 "product_name": "Malloc disk", 00:10:26.473 "block_size": 512, 00:10:26.473 "num_blocks": 65536, 00:10:26.473 "uuid": "4261033f-e2e8-43a8-b185-a32d1dc20950", 
00:10:26.473 "assigned_rate_limits": { 00:10:26.473 "rw_ios_per_sec": 0, 00:10:26.473 "rw_mbytes_per_sec": 0, 00:10:26.473 "r_mbytes_per_sec": 0, 00:10:26.473 "w_mbytes_per_sec": 0 00:10:26.473 }, 00:10:26.473 "claimed": true, 00:10:26.473 "claim_type": "exclusive_write", 00:10:26.473 "zoned": false, 00:10:26.473 "supported_io_types": { 00:10:26.473 "read": true, 00:10:26.473 "write": true, 00:10:26.473 "unmap": true, 00:10:26.473 "flush": true, 00:10:26.473 "reset": true, 00:10:26.473 "nvme_admin": false, 00:10:26.473 "nvme_io": false, 00:10:26.473 "nvme_io_md": false, 00:10:26.473 "write_zeroes": true, 00:10:26.473 "zcopy": true, 00:10:26.473 "get_zone_info": false, 00:10:26.473 "zone_management": false, 00:10:26.473 "zone_append": false, 00:10:26.473 "compare": false, 00:10:26.473 "compare_and_write": false, 00:10:26.473 "abort": true, 00:10:26.473 "seek_hole": false, 00:10:26.473 "seek_data": false, 00:10:26.473 "copy": true, 00:10:26.473 "nvme_iov_md": false 00:10:26.473 }, 00:10:26.473 "memory_domains": [ 00:10:26.473 { 00:10:26.473 "dma_device_id": "system", 00:10:26.473 "dma_device_type": 1 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.473 "dma_device_type": 2 00:10:26.473 } 00:10:26.473 ], 00:10:26.473 "driver_specific": {} 00:10:26.473 } 00:10:26.473 ] 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.473 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.474 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.474 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.474 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.474 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.474 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.474 "name": "Existed_Raid", 00:10:26.474 "uuid": "a6a95e92-84b1-45cc-9dcf-22e067c9ee35", 00:10:26.474 "strip_size_kb": 64, 00:10:26.474 "state": "configuring", 00:10:26.474 "raid_level": "concat", 00:10:26.474 "superblock": true, 00:10:26.474 "num_base_bdevs": 4, 00:10:26.474 "num_base_bdevs_discovered": 2, 00:10:26.474 
"num_base_bdevs_operational": 4, 00:10:26.474 "base_bdevs_list": [ 00:10:26.474 { 00:10:26.474 "name": "BaseBdev1", 00:10:26.474 "uuid": "d59791ee-e8b8-4837-b4e0-39ab06463333", 00:10:26.474 "is_configured": true, 00:10:26.474 "data_offset": 2048, 00:10:26.474 "data_size": 63488 00:10:26.474 }, 00:10:26.474 { 00:10:26.474 "name": "BaseBdev2", 00:10:26.474 "uuid": "4261033f-e2e8-43a8-b185-a32d1dc20950", 00:10:26.474 "is_configured": true, 00:10:26.474 "data_offset": 2048, 00:10:26.474 "data_size": 63488 00:10:26.474 }, 00:10:26.474 { 00:10:26.474 "name": "BaseBdev3", 00:10:26.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.474 "is_configured": false, 00:10:26.474 "data_offset": 0, 00:10:26.474 "data_size": 0 00:10:26.474 }, 00:10:26.474 { 00:10:26.474 "name": "BaseBdev4", 00:10:26.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.474 "is_configured": false, 00:10:26.474 "data_offset": 0, 00:10:26.474 "data_size": 0 00:10:26.474 } 00:10:26.474 ] 00:10:26.474 }' 00:10:26.474 13:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.474 13:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 [2024-11-26 13:23:15.358300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.043 BaseBdev3 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 [ 00:10:27.043 { 00:10:27.043 "name": "BaseBdev3", 00:10:27.043 "aliases": [ 00:10:27.043 "0aa3d15a-a22c-4dd3-94c6-4b3032fa8a1e" 00:10:27.043 ], 00:10:27.043 "product_name": "Malloc disk", 00:10:27.043 "block_size": 512, 00:10:27.043 "num_blocks": 65536, 00:10:27.043 "uuid": "0aa3d15a-a22c-4dd3-94c6-4b3032fa8a1e", 00:10:27.043 "assigned_rate_limits": { 00:10:27.043 "rw_ios_per_sec": 0, 00:10:27.043 "rw_mbytes_per_sec": 0, 00:10:27.043 "r_mbytes_per_sec": 0, 00:10:27.043 "w_mbytes_per_sec": 0 00:10:27.043 }, 00:10:27.043 "claimed": true, 00:10:27.043 "claim_type": "exclusive_write", 00:10:27.043 "zoned": false, 00:10:27.043 "supported_io_types": { 
00:10:27.043 "read": true, 00:10:27.043 "write": true, 00:10:27.043 "unmap": true, 00:10:27.043 "flush": true, 00:10:27.043 "reset": true, 00:10:27.043 "nvme_admin": false, 00:10:27.043 "nvme_io": false, 00:10:27.043 "nvme_io_md": false, 00:10:27.043 "write_zeroes": true, 00:10:27.043 "zcopy": true, 00:10:27.043 "get_zone_info": false, 00:10:27.043 "zone_management": false, 00:10:27.043 "zone_append": false, 00:10:27.043 "compare": false, 00:10:27.043 "compare_and_write": false, 00:10:27.043 "abort": true, 00:10:27.043 "seek_hole": false, 00:10:27.043 "seek_data": false, 00:10:27.043 "copy": true, 00:10:27.043 "nvme_iov_md": false 00:10:27.043 }, 00:10:27.043 "memory_domains": [ 00:10:27.043 { 00:10:27.043 "dma_device_id": "system", 00:10:27.043 "dma_device_type": 1 00:10:27.043 }, 00:10:27.043 { 00:10:27.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.043 "dma_device_type": 2 00:10:27.043 } 00:10:27.043 ], 00:10:27.043 "driver_specific": {} 00:10:27.043 } 00:10:27.043 ] 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.043 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.043 "name": "Existed_Raid", 00:10:27.043 "uuid": "a6a95e92-84b1-45cc-9dcf-22e067c9ee35", 00:10:27.043 "strip_size_kb": 64, 00:10:27.043 "state": "configuring", 00:10:27.043 "raid_level": "concat", 00:10:27.043 "superblock": true, 00:10:27.043 "num_base_bdevs": 4, 00:10:27.043 "num_base_bdevs_discovered": 3, 00:10:27.043 "num_base_bdevs_operational": 4, 00:10:27.043 "base_bdevs_list": [ 00:10:27.044 { 00:10:27.044 "name": "BaseBdev1", 00:10:27.044 "uuid": "d59791ee-e8b8-4837-b4e0-39ab06463333", 00:10:27.044 "is_configured": true, 00:10:27.044 "data_offset": 2048, 00:10:27.044 "data_size": 63488 00:10:27.044 }, 00:10:27.044 { 00:10:27.044 "name": "BaseBdev2", 00:10:27.044 
"uuid": "4261033f-e2e8-43a8-b185-a32d1dc20950", 00:10:27.044 "is_configured": true, 00:10:27.044 "data_offset": 2048, 00:10:27.044 "data_size": 63488 00:10:27.044 }, 00:10:27.044 { 00:10:27.044 "name": "BaseBdev3", 00:10:27.044 "uuid": "0aa3d15a-a22c-4dd3-94c6-4b3032fa8a1e", 00:10:27.044 "is_configured": true, 00:10:27.044 "data_offset": 2048, 00:10:27.044 "data_size": 63488 00:10:27.044 }, 00:10:27.044 { 00:10:27.044 "name": "BaseBdev4", 00:10:27.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.044 "is_configured": false, 00:10:27.044 "data_offset": 0, 00:10:27.044 "data_size": 0 00:10:27.044 } 00:10:27.044 ] 00:10:27.044 }' 00:10:27.044 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.044 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 [2024-11-26 13:23:15.942895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.612 [2024-11-26 13:23:15.943173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.612 [2024-11-26 13:23:15.943191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:27.612 BaseBdev4 00:10:27.612 [2024-11-26 13:23:15.943504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:27.612 [2024-11-26 13:23:15.943679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.612 [2024-11-26 13:23:15.943705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:27.612 [2024-11-26 13:23:15.943860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 [ 00:10:27.612 { 00:10:27.612 "name": "BaseBdev4", 00:10:27.612 "aliases": [ 00:10:27.612 "e33fbea2-af61-4795-a9ad-164cf45f81fa" 00:10:27.612 ], 00:10:27.612 "product_name": "Malloc disk", 00:10:27.612 "block_size": 512, 00:10:27.612 
"num_blocks": 65536, 00:10:27.612 "uuid": "e33fbea2-af61-4795-a9ad-164cf45f81fa", 00:10:27.612 "assigned_rate_limits": { 00:10:27.612 "rw_ios_per_sec": 0, 00:10:27.612 "rw_mbytes_per_sec": 0, 00:10:27.612 "r_mbytes_per_sec": 0, 00:10:27.612 "w_mbytes_per_sec": 0 00:10:27.612 }, 00:10:27.612 "claimed": true, 00:10:27.612 "claim_type": "exclusive_write", 00:10:27.612 "zoned": false, 00:10:27.612 "supported_io_types": { 00:10:27.612 "read": true, 00:10:27.612 "write": true, 00:10:27.612 "unmap": true, 00:10:27.612 "flush": true, 00:10:27.612 "reset": true, 00:10:27.612 "nvme_admin": false, 00:10:27.612 "nvme_io": false, 00:10:27.612 "nvme_io_md": false, 00:10:27.612 "write_zeroes": true, 00:10:27.612 "zcopy": true, 00:10:27.612 "get_zone_info": false, 00:10:27.612 "zone_management": false, 00:10:27.612 "zone_append": false, 00:10:27.612 "compare": false, 00:10:27.612 "compare_and_write": false, 00:10:27.612 "abort": true, 00:10:27.612 "seek_hole": false, 00:10:27.612 "seek_data": false, 00:10:27.612 "copy": true, 00:10:27.612 "nvme_iov_md": false 00:10:27.612 }, 00:10:27.612 "memory_domains": [ 00:10:27.612 { 00:10:27.612 "dma_device_id": "system", 00:10:27.612 "dma_device_type": 1 00:10:27.612 }, 00:10:27.612 { 00:10:27.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.612 "dma_device_type": 2 00:10:27.612 } 00:10:27.612 ], 00:10:27.612 "driver_specific": {} 00:10:27.612 } 00:10:27.612 ] 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 13:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.613 "name": "Existed_Raid", 00:10:27.613 "uuid": "a6a95e92-84b1-45cc-9dcf-22e067c9ee35", 00:10:27.613 "strip_size_kb": 64, 00:10:27.613 "state": "online", 00:10:27.613 "raid_level": "concat", 00:10:27.613 "superblock": true, 00:10:27.613 "num_base_bdevs": 4, 
00:10:27.613 "num_base_bdevs_discovered": 4, 00:10:27.613 "num_base_bdevs_operational": 4, 00:10:27.613 "base_bdevs_list": [ 00:10:27.613 { 00:10:27.613 "name": "BaseBdev1", 00:10:27.613 "uuid": "d59791ee-e8b8-4837-b4e0-39ab06463333", 00:10:27.613 "is_configured": true, 00:10:27.613 "data_offset": 2048, 00:10:27.613 "data_size": 63488 00:10:27.613 }, 00:10:27.613 { 00:10:27.613 "name": "BaseBdev2", 00:10:27.613 "uuid": "4261033f-e2e8-43a8-b185-a32d1dc20950", 00:10:27.613 "is_configured": true, 00:10:27.613 "data_offset": 2048, 00:10:27.613 "data_size": 63488 00:10:27.613 }, 00:10:27.613 { 00:10:27.613 "name": "BaseBdev3", 00:10:27.613 "uuid": "0aa3d15a-a22c-4dd3-94c6-4b3032fa8a1e", 00:10:27.613 "is_configured": true, 00:10:27.613 "data_offset": 2048, 00:10:27.613 "data_size": 63488 00:10:27.613 }, 00:10:27.613 { 00:10:27.613 "name": "BaseBdev4", 00:10:27.613 "uuid": "e33fbea2-af61-4795-a9ad-164cf45f81fa", 00:10:27.613 "is_configured": true, 00:10:27.613 "data_offset": 2048, 00:10:27.613 "data_size": 63488 00:10:27.613 } 00:10:27.613 ] 00:10:27.613 }' 00:10:27.613 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.613 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.182 
13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.182 [2024-11-26 13:23:16.511359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.182 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.182 "name": "Existed_Raid", 00:10:28.182 "aliases": [ 00:10:28.182 "a6a95e92-84b1-45cc-9dcf-22e067c9ee35" 00:10:28.182 ], 00:10:28.182 "product_name": "Raid Volume", 00:10:28.182 "block_size": 512, 00:10:28.182 "num_blocks": 253952, 00:10:28.182 "uuid": "a6a95e92-84b1-45cc-9dcf-22e067c9ee35", 00:10:28.182 "assigned_rate_limits": { 00:10:28.182 "rw_ios_per_sec": 0, 00:10:28.182 "rw_mbytes_per_sec": 0, 00:10:28.182 "r_mbytes_per_sec": 0, 00:10:28.182 "w_mbytes_per_sec": 0 00:10:28.182 }, 00:10:28.182 "claimed": false, 00:10:28.182 "zoned": false, 00:10:28.182 "supported_io_types": { 00:10:28.182 "read": true, 00:10:28.182 "write": true, 00:10:28.182 "unmap": true, 00:10:28.182 "flush": true, 00:10:28.182 "reset": true, 00:10:28.182 "nvme_admin": false, 00:10:28.182 "nvme_io": false, 00:10:28.182 "nvme_io_md": false, 00:10:28.182 "write_zeroes": true, 00:10:28.182 "zcopy": false, 00:10:28.182 "get_zone_info": false, 00:10:28.182 "zone_management": false, 00:10:28.182 "zone_append": false, 00:10:28.182 "compare": false, 00:10:28.182 "compare_and_write": false, 00:10:28.182 "abort": false, 00:10:28.182 "seek_hole": false, 00:10:28.182 "seek_data": false, 00:10:28.182 "copy": false, 00:10:28.182 
"nvme_iov_md": false 00:10:28.182 }, 00:10:28.182 "memory_domains": [ 00:10:28.182 { 00:10:28.183 "dma_device_id": "system", 00:10:28.183 "dma_device_type": 1 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.183 "dma_device_type": 2 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "dma_device_id": "system", 00:10:28.183 "dma_device_type": 1 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.183 "dma_device_type": 2 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "dma_device_id": "system", 00:10:28.183 "dma_device_type": 1 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.183 "dma_device_type": 2 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "dma_device_id": "system", 00:10:28.183 "dma_device_type": 1 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.183 "dma_device_type": 2 00:10:28.183 } 00:10:28.183 ], 00:10:28.183 "driver_specific": { 00:10:28.183 "raid": { 00:10:28.183 "uuid": "a6a95e92-84b1-45cc-9dcf-22e067c9ee35", 00:10:28.183 "strip_size_kb": 64, 00:10:28.183 "state": "online", 00:10:28.183 "raid_level": "concat", 00:10:28.183 "superblock": true, 00:10:28.183 "num_base_bdevs": 4, 00:10:28.183 "num_base_bdevs_discovered": 4, 00:10:28.183 "num_base_bdevs_operational": 4, 00:10:28.183 "base_bdevs_list": [ 00:10:28.183 { 00:10:28.183 "name": "BaseBdev1", 00:10:28.183 "uuid": "d59791ee-e8b8-4837-b4e0-39ab06463333", 00:10:28.183 "is_configured": true, 00:10:28.183 "data_offset": 2048, 00:10:28.183 "data_size": 63488 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "name": "BaseBdev2", 00:10:28.183 "uuid": "4261033f-e2e8-43a8-b185-a32d1dc20950", 00:10:28.183 "is_configured": true, 00:10:28.183 "data_offset": 2048, 00:10:28.183 "data_size": 63488 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "name": "BaseBdev3", 00:10:28.183 "uuid": "0aa3d15a-a22c-4dd3-94c6-4b3032fa8a1e", 00:10:28.183 "is_configured": true, 
00:10:28.183 "data_offset": 2048, 00:10:28.183 "data_size": 63488 00:10:28.183 }, 00:10:28.183 { 00:10:28.183 "name": "BaseBdev4", 00:10:28.183 "uuid": "e33fbea2-af61-4795-a9ad-164cf45f81fa", 00:10:28.183 "is_configured": true, 00:10:28.183 "data_offset": 2048, 00:10:28.183 "data_size": 63488 00:10:28.183 } 00:10:28.183 ] 00:10:28.183 } 00:10:28.183 } 00:10:28.183 }' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:28.183 BaseBdev2 00:10:28.183 BaseBdev3 00:10:28.183 BaseBdev4' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.183 13:23:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.183 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.442 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.443 [2024-11-26 13:23:16.895175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.443 [2024-11-26 13:23:16.895207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.443 [2024-11-26 13:23:16.895258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.443 13:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:28.702 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.702 "name": "Existed_Raid", 00:10:28.702 "uuid": "a6a95e92-84b1-45cc-9dcf-22e067c9ee35", 00:10:28.702 "strip_size_kb": 64, 00:10:28.702 "state": "offline", 00:10:28.702 "raid_level": "concat", 00:10:28.702 "superblock": true, 00:10:28.702 "num_base_bdevs": 4, 00:10:28.702 "num_base_bdevs_discovered": 3, 00:10:28.702 "num_base_bdevs_operational": 3, 00:10:28.702 "base_bdevs_list": [ 00:10:28.702 { 00:10:28.702 "name": null, 00:10:28.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.702 "is_configured": false, 00:10:28.702 "data_offset": 0, 00:10:28.702 "data_size": 63488 00:10:28.702 }, 00:10:28.702 { 00:10:28.702 "name": "BaseBdev2", 00:10:28.702 "uuid": "4261033f-e2e8-43a8-b185-a32d1dc20950", 00:10:28.702 "is_configured": true, 00:10:28.702 "data_offset": 2048, 00:10:28.702 "data_size": 63488 00:10:28.702 }, 00:10:28.702 { 00:10:28.702 "name": "BaseBdev3", 00:10:28.702 "uuid": "0aa3d15a-a22c-4dd3-94c6-4b3032fa8a1e", 00:10:28.702 "is_configured": true, 00:10:28.702 "data_offset": 2048, 00:10:28.702 "data_size": 63488 00:10:28.702 }, 00:10:28.702 { 00:10:28.702 "name": "BaseBdev4", 00:10:28.702 "uuid": "e33fbea2-af61-4795-a9ad-164cf45f81fa", 00:10:28.702 "is_configured": true, 00:10:28.702 "data_offset": 2048, 00:10:28.702 "data_size": 63488 00:10:28.702 } 00:10:28.702 ] 00:10:28.702 }' 00:10:28.702 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.702 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.962 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.962 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.962 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.962 
13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.962 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.962 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.962 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.221 [2024-11-26 13:23:17.544114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.221 [2024-11-26 13:23:17.672174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.221 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:29.482 13:23:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 [2024-11-26 13:23:17.799465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:29.482 [2024-11-26 13:23:17.799520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 BaseBdev2 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 [ 00:10:29.482 { 00:10:29.482 "name": "BaseBdev2", 00:10:29.482 "aliases": [ 00:10:29.482 
"ce2aba71-6e6b-4b40-af9d-6921cd7e9926" 00:10:29.482 ], 00:10:29.482 "product_name": "Malloc disk", 00:10:29.482 "block_size": 512, 00:10:29.482 "num_blocks": 65536, 00:10:29.482 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:29.482 "assigned_rate_limits": { 00:10:29.482 "rw_ios_per_sec": 0, 00:10:29.482 "rw_mbytes_per_sec": 0, 00:10:29.482 "r_mbytes_per_sec": 0, 00:10:29.482 "w_mbytes_per_sec": 0 00:10:29.482 }, 00:10:29.482 "claimed": false, 00:10:29.482 "zoned": false, 00:10:29.482 "supported_io_types": { 00:10:29.482 "read": true, 00:10:29.482 "write": true, 00:10:29.482 "unmap": true, 00:10:29.482 "flush": true, 00:10:29.482 "reset": true, 00:10:29.482 "nvme_admin": false, 00:10:29.482 "nvme_io": false, 00:10:29.482 "nvme_io_md": false, 00:10:29.482 "write_zeroes": true, 00:10:29.482 "zcopy": true, 00:10:29.482 "get_zone_info": false, 00:10:29.482 "zone_management": false, 00:10:29.482 "zone_append": false, 00:10:29.482 "compare": false, 00:10:29.482 "compare_and_write": false, 00:10:29.482 "abort": true, 00:10:29.482 "seek_hole": false, 00:10:29.482 "seek_data": false, 00:10:29.482 "copy": true, 00:10:29.482 "nvme_iov_md": false 00:10:29.482 }, 00:10:29.482 "memory_domains": [ 00:10:29.482 { 00:10:29.482 "dma_device_id": "system", 00:10:29.482 "dma_device_type": 1 00:10:29.482 }, 00:10:29.482 { 00:10:29.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.482 "dma_device_type": 2 00:10:29.482 } 00:10:29.482 ], 00:10:29.482 "driver_specific": {} 00:10:29.482 } 00:10:29.482 ] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.482 13:23:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 BaseBdev3 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.742 [ 00:10:29.742 { 
00:10:29.742 "name": "BaseBdev3", 00:10:29.742 "aliases": [ 00:10:29.742 "79356f26-7112-4918-9418-a3715fdcf7cd" 00:10:29.742 ], 00:10:29.742 "product_name": "Malloc disk", 00:10:29.742 "block_size": 512, 00:10:29.742 "num_blocks": 65536, 00:10:29.742 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:29.742 "assigned_rate_limits": { 00:10:29.742 "rw_ios_per_sec": 0, 00:10:29.742 "rw_mbytes_per_sec": 0, 00:10:29.742 "r_mbytes_per_sec": 0, 00:10:29.742 "w_mbytes_per_sec": 0 00:10:29.742 }, 00:10:29.742 "claimed": false, 00:10:29.742 "zoned": false, 00:10:29.742 "supported_io_types": { 00:10:29.742 "read": true, 00:10:29.742 "write": true, 00:10:29.742 "unmap": true, 00:10:29.742 "flush": true, 00:10:29.742 "reset": true, 00:10:29.742 "nvme_admin": false, 00:10:29.742 "nvme_io": false, 00:10:29.742 "nvme_io_md": false, 00:10:29.742 "write_zeroes": true, 00:10:29.742 "zcopy": true, 00:10:29.742 "get_zone_info": false, 00:10:29.742 "zone_management": false, 00:10:29.742 "zone_append": false, 00:10:29.742 "compare": false, 00:10:29.742 "compare_and_write": false, 00:10:29.742 "abort": true, 00:10:29.742 "seek_hole": false, 00:10:29.742 "seek_data": false, 00:10:29.742 "copy": true, 00:10:29.742 "nvme_iov_md": false 00:10:29.742 }, 00:10:29.742 "memory_domains": [ 00:10:29.742 { 00:10:29.742 "dma_device_id": "system", 00:10:29.742 "dma_device_type": 1 00:10:29.742 }, 00:10:29.742 { 00:10:29.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.742 "dma_device_type": 2 00:10:29.742 } 00:10:29.742 ], 00:10:29.742 "driver_specific": {} 00:10:29.742 } 00:10:29.742 ] 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.742 BaseBdev4 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.742 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:29.742 [ 00:10:29.742 { 00:10:29.742 "name": "BaseBdev4", 00:10:29.742 "aliases": [ 00:10:29.742 "69a075b7-5190-4686-b3c2-3d96b55cd1da" 00:10:29.742 ], 00:10:29.742 "product_name": "Malloc disk", 00:10:29.742 "block_size": 512, 00:10:29.742 "num_blocks": 65536, 00:10:29.742 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:29.742 "assigned_rate_limits": { 00:10:29.742 "rw_ios_per_sec": 0, 00:10:29.742 "rw_mbytes_per_sec": 0, 00:10:29.742 "r_mbytes_per_sec": 0, 00:10:29.742 "w_mbytes_per_sec": 0 00:10:29.742 }, 00:10:29.742 "claimed": false, 00:10:29.742 "zoned": false, 00:10:29.742 "supported_io_types": { 00:10:29.742 "read": true, 00:10:29.742 "write": true, 00:10:29.742 "unmap": true, 00:10:29.742 "flush": true, 00:10:29.742 "reset": true, 00:10:29.742 "nvme_admin": false, 00:10:29.742 "nvme_io": false, 00:10:29.742 "nvme_io_md": false, 00:10:29.742 "write_zeroes": true, 00:10:29.742 "zcopy": true, 00:10:29.742 "get_zone_info": false, 00:10:29.742 "zone_management": false, 00:10:29.742 "zone_append": false, 00:10:29.742 "compare": false, 00:10:29.742 "compare_and_write": false, 00:10:29.742 "abort": true, 00:10:29.742 "seek_hole": false, 00:10:29.742 "seek_data": false, 00:10:29.742 "copy": true, 00:10:29.742 "nvme_iov_md": false 00:10:29.742 }, 00:10:29.742 "memory_domains": [ 00:10:29.742 { 00:10:29.742 "dma_device_id": "system", 00:10:29.742 "dma_device_type": 1 00:10:29.742 }, 00:10:29.742 { 00:10:29.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.742 "dma_device_type": 2 00:10:29.742 } 00:10:29.742 ], 00:10:29.742 "driver_specific": {} 00:10:29.742 } 00:10:29.742 ] 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.743 13:23:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.743 [2024-11-26 13:23:18.144708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.743 [2024-11-26 13:23:18.144776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.743 [2024-11-26 13:23:18.144807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.743 [2024-11-26 13:23:18.146934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.743 [2024-11-26 13:23:18.147734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.743 "name": "Existed_Raid", 00:10:29.743 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:29.743 "strip_size_kb": 64, 00:10:29.743 "state": "configuring", 00:10:29.743 "raid_level": "concat", 00:10:29.743 "superblock": true, 00:10:29.743 "num_base_bdevs": 4, 00:10:29.743 "num_base_bdevs_discovered": 3, 00:10:29.743 "num_base_bdevs_operational": 4, 00:10:29.743 "base_bdevs_list": [ 00:10:29.743 { 00:10:29.743 "name": "BaseBdev1", 00:10:29.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.743 "is_configured": false, 00:10:29.743 "data_offset": 0, 00:10:29.743 "data_size": 0 00:10:29.743 }, 00:10:29.743 { 00:10:29.743 "name": "BaseBdev2", 00:10:29.743 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:29.743 "is_configured": true, 00:10:29.743 "data_offset": 2048, 00:10:29.743 "data_size": 63488 
00:10:29.743 }, 00:10:29.743 { 00:10:29.743 "name": "BaseBdev3", 00:10:29.743 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:29.743 "is_configured": true, 00:10:29.743 "data_offset": 2048, 00:10:29.743 "data_size": 63488 00:10:29.743 }, 00:10:29.743 { 00:10:29.743 "name": "BaseBdev4", 00:10:29.743 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:29.743 "is_configured": true, 00:10:29.743 "data_offset": 2048, 00:10:29.743 "data_size": 63488 00:10:29.743 } 00:10:29.743 ] 00:10:29.743 }' 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.743 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.311 [2024-11-26 13:23:18.692831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.311 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.311 "name": "Existed_Raid", 00:10:30.311 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:30.311 "strip_size_kb": 64, 00:10:30.311 "state": "configuring", 00:10:30.311 "raid_level": "concat", 00:10:30.311 "superblock": true, 00:10:30.311 "num_base_bdevs": 4, 00:10:30.311 "num_base_bdevs_discovered": 2, 00:10:30.312 "num_base_bdevs_operational": 4, 00:10:30.312 "base_bdevs_list": [ 00:10:30.312 { 00:10:30.312 "name": "BaseBdev1", 00:10:30.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.312 "is_configured": false, 00:10:30.312 "data_offset": 0, 00:10:30.312 "data_size": 0 00:10:30.312 }, 00:10:30.312 { 00:10:30.312 "name": null, 00:10:30.312 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:30.312 "is_configured": false, 00:10:30.312 "data_offset": 0, 00:10:30.312 "data_size": 63488 
00:10:30.312 }, 00:10:30.312 { 00:10:30.312 "name": "BaseBdev3", 00:10:30.312 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:30.312 "is_configured": true, 00:10:30.312 "data_offset": 2048, 00:10:30.312 "data_size": 63488 00:10:30.312 }, 00:10:30.312 { 00:10:30.312 "name": "BaseBdev4", 00:10:30.312 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:30.312 "is_configured": true, 00:10:30.312 "data_offset": 2048, 00:10:30.312 "data_size": 63488 00:10:30.312 } 00:10:30.312 ] 00:10:30.312 }' 00:10:30.312 13:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.312 13:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 [2024-11-26 13:23:19.302891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.878 BaseBdev1 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.878 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.879 [ 00:10:30.879 { 00:10:30.879 "name": "BaseBdev1", 00:10:30.879 "aliases": [ 00:10:30.879 "1961c84f-6cdc-433c-bd7a-08fedbaee8ec" 00:10:30.879 ], 00:10:30.879 "product_name": "Malloc disk", 00:10:30.879 "block_size": 512, 00:10:30.879 "num_blocks": 65536, 00:10:30.879 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:30.879 "assigned_rate_limits": { 00:10:30.879 "rw_ios_per_sec": 0, 00:10:30.879 "rw_mbytes_per_sec": 0, 
00:10:30.879 "r_mbytes_per_sec": 0, 00:10:30.879 "w_mbytes_per_sec": 0 00:10:30.879 }, 00:10:30.879 "claimed": true, 00:10:30.879 "claim_type": "exclusive_write", 00:10:30.879 "zoned": false, 00:10:30.879 "supported_io_types": { 00:10:30.879 "read": true, 00:10:30.879 "write": true, 00:10:30.879 "unmap": true, 00:10:30.879 "flush": true, 00:10:30.879 "reset": true, 00:10:30.879 "nvme_admin": false, 00:10:30.879 "nvme_io": false, 00:10:30.879 "nvme_io_md": false, 00:10:30.879 "write_zeroes": true, 00:10:30.879 "zcopy": true, 00:10:30.879 "get_zone_info": false, 00:10:30.879 "zone_management": false, 00:10:30.879 "zone_append": false, 00:10:30.879 "compare": false, 00:10:30.879 "compare_and_write": false, 00:10:30.879 "abort": true, 00:10:30.879 "seek_hole": false, 00:10:30.879 "seek_data": false, 00:10:30.879 "copy": true, 00:10:30.879 "nvme_iov_md": false 00:10:30.879 }, 00:10:30.879 "memory_domains": [ 00:10:30.879 { 00:10:30.879 "dma_device_id": "system", 00:10:30.879 "dma_device_type": 1 00:10:30.879 }, 00:10:30.879 { 00:10:30.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.879 "dma_device_type": 2 00:10:30.879 } 00:10:30.879 ], 00:10:30.879 "driver_specific": {} 00:10:30.879 } 00:10:30.879 ] 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.879 13:23:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.879 "name": "Existed_Raid", 00:10:30.879 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:30.879 "strip_size_kb": 64, 00:10:30.879 "state": "configuring", 00:10:30.879 "raid_level": "concat", 00:10:30.879 "superblock": true, 00:10:30.879 "num_base_bdevs": 4, 00:10:30.879 "num_base_bdevs_discovered": 3, 00:10:30.879 "num_base_bdevs_operational": 4, 00:10:30.879 "base_bdevs_list": [ 00:10:30.879 { 00:10:30.879 "name": "BaseBdev1", 00:10:30.879 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:30.879 "is_configured": true, 00:10:30.879 "data_offset": 2048, 00:10:30.879 "data_size": 63488 00:10:30.879 }, 00:10:30.879 { 
00:10:30.879 "name": null, 00:10:30.879 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:30.879 "is_configured": false, 00:10:30.879 "data_offset": 0, 00:10:30.879 "data_size": 63488 00:10:30.879 }, 00:10:30.879 { 00:10:30.879 "name": "BaseBdev3", 00:10:30.879 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:30.879 "is_configured": true, 00:10:30.879 "data_offset": 2048, 00:10:30.879 "data_size": 63488 00:10:30.879 }, 00:10:30.879 { 00:10:30.879 "name": "BaseBdev4", 00:10:30.879 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:30.879 "is_configured": true, 00:10:30.879 "data_offset": 2048, 00:10:30.879 "data_size": 63488 00:10:30.879 } 00:10:30.879 ] 00:10:30.879 }' 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.879 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.446 [2024-11-26 13:23:19.915054] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.446 13:23:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.446 "name": "Existed_Raid", 00:10:31.446 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:31.446 "strip_size_kb": 64, 00:10:31.446 "state": "configuring", 00:10:31.446 "raid_level": "concat", 00:10:31.446 "superblock": true, 00:10:31.446 "num_base_bdevs": 4, 00:10:31.446 "num_base_bdevs_discovered": 2, 00:10:31.446 "num_base_bdevs_operational": 4, 00:10:31.446 "base_bdevs_list": [ 00:10:31.446 { 00:10:31.446 "name": "BaseBdev1", 00:10:31.446 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:31.446 "is_configured": true, 00:10:31.446 "data_offset": 2048, 00:10:31.446 "data_size": 63488 00:10:31.446 }, 00:10:31.446 { 00:10:31.446 "name": null, 00:10:31.446 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:31.446 "is_configured": false, 00:10:31.446 "data_offset": 0, 00:10:31.446 "data_size": 63488 00:10:31.446 }, 00:10:31.446 { 00:10:31.446 "name": null, 00:10:31.446 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:31.446 "is_configured": false, 00:10:31.446 "data_offset": 0, 00:10:31.446 "data_size": 63488 00:10:31.446 }, 00:10:31.446 { 00:10:31.446 "name": "BaseBdev4", 00:10:31.446 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:31.446 "is_configured": true, 00:10:31.446 "data_offset": 2048, 00:10:31.446 "data_size": 63488 00:10:31.446 } 00:10:31.446 ] 00:10:31.446 }' 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.446 13:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.014 
13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.014 [2024-11-26 13:23:20.495193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.014 "name": "Existed_Raid", 00:10:32.014 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:32.014 "strip_size_kb": 64, 00:10:32.014 "state": "configuring", 00:10:32.014 "raid_level": "concat", 00:10:32.014 "superblock": true, 00:10:32.014 "num_base_bdevs": 4, 00:10:32.014 "num_base_bdevs_discovered": 3, 00:10:32.014 "num_base_bdevs_operational": 4, 00:10:32.014 "base_bdevs_list": [ 00:10:32.014 { 00:10:32.014 "name": "BaseBdev1", 00:10:32.014 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:32.014 "is_configured": true, 00:10:32.014 "data_offset": 2048, 00:10:32.014 "data_size": 63488 00:10:32.014 }, 00:10:32.014 { 00:10:32.014 "name": null, 00:10:32.014 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:32.014 "is_configured": false, 00:10:32.014 "data_offset": 0, 00:10:32.014 "data_size": 63488 00:10:32.014 }, 00:10:32.014 { 00:10:32.014 "name": "BaseBdev3", 00:10:32.014 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:32.014 "is_configured": true, 00:10:32.014 "data_offset": 2048, 00:10:32.014 "data_size": 63488 00:10:32.014 }, 00:10:32.014 { 00:10:32.014 "name": "BaseBdev4", 00:10:32.014 "uuid": 
"69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:32.014 "is_configured": true, 00:10:32.014 "data_offset": 2048, 00:10:32.014 "data_size": 63488 00:10:32.014 } 00:10:32.014 ] 00:10:32.014 }' 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.014 13:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.582 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.582 [2024-11-26 13:23:21.079375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.842 "name": "Existed_Raid", 00:10:32.842 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:32.842 "strip_size_kb": 64, 00:10:32.842 "state": "configuring", 00:10:32.842 "raid_level": "concat", 00:10:32.842 "superblock": true, 00:10:32.842 "num_base_bdevs": 4, 00:10:32.842 "num_base_bdevs_discovered": 2, 00:10:32.842 "num_base_bdevs_operational": 4, 00:10:32.842 "base_bdevs_list": [ 00:10:32.842 { 00:10:32.842 "name": null, 00:10:32.842 
"uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:32.842 "is_configured": false, 00:10:32.842 "data_offset": 0, 00:10:32.842 "data_size": 63488 00:10:32.842 }, 00:10:32.842 { 00:10:32.842 "name": null, 00:10:32.842 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:32.842 "is_configured": false, 00:10:32.842 "data_offset": 0, 00:10:32.842 "data_size": 63488 00:10:32.842 }, 00:10:32.842 { 00:10:32.842 "name": "BaseBdev3", 00:10:32.842 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:32.842 "is_configured": true, 00:10:32.842 "data_offset": 2048, 00:10:32.842 "data_size": 63488 00:10:32.842 }, 00:10:32.842 { 00:10:32.842 "name": "BaseBdev4", 00:10:32.842 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:32.842 "is_configured": true, 00:10:32.842 "data_offset": 2048, 00:10:32.842 "data_size": 63488 00:10:32.842 } 00:10:32.842 ] 00:10:32.842 }' 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.842 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.410 [2024-11-26 13:23:21.724210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.410 13:23:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.410 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.410 "name": "Existed_Raid", 00:10:33.410 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:33.410 "strip_size_kb": 64, 00:10:33.410 "state": "configuring", 00:10:33.410 "raid_level": "concat", 00:10:33.410 "superblock": true, 00:10:33.410 "num_base_bdevs": 4, 00:10:33.410 "num_base_bdevs_discovered": 3, 00:10:33.410 "num_base_bdevs_operational": 4, 00:10:33.410 "base_bdevs_list": [ 00:10:33.410 { 00:10:33.410 "name": null, 00:10:33.410 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:33.410 "is_configured": false, 00:10:33.410 "data_offset": 0, 00:10:33.410 "data_size": 63488 00:10:33.410 }, 00:10:33.410 { 00:10:33.410 "name": "BaseBdev2", 00:10:33.410 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:33.410 "is_configured": true, 00:10:33.410 "data_offset": 2048, 00:10:33.410 "data_size": 63488 00:10:33.410 }, 00:10:33.410 { 00:10:33.410 "name": "BaseBdev3", 00:10:33.410 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:33.410 "is_configured": true, 00:10:33.411 "data_offset": 2048, 00:10:33.411 "data_size": 63488 00:10:33.411 }, 00:10:33.411 { 00:10:33.411 "name": "BaseBdev4", 00:10:33.411 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:33.411 "is_configured": true, 00:10:33.411 "data_offset": 2048, 00:10:33.411 "data_size": 63488 00:10:33.411 } 00:10:33.411 ] 00:10:33.411 }' 00:10:33.411 13:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.411 13:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.979 13:23:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1961c84f-6cdc-433c-bd7a-08fedbaee8ec 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 [2024-11-26 13:23:22.376996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.979 [2024-11-26 13:23:22.377270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.979 [2024-11-26 13:23:22.377287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.979 NewBaseBdev 00:10:33.979 [2024-11-26 13:23:22.377641] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:33.979 [2024-11-26 13:23:22.377836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.979 [2024-11-26 13:23:22.377858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:33.979 [2024-11-26 13:23:22.378030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 
13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 [ 00:10:33.979 { 00:10:33.979 "name": "NewBaseBdev", 00:10:33.979 "aliases": [ 00:10:33.979 "1961c84f-6cdc-433c-bd7a-08fedbaee8ec" 00:10:33.979 ], 00:10:33.979 "product_name": "Malloc disk", 00:10:33.979 "block_size": 512, 00:10:33.979 "num_blocks": 65536, 00:10:33.979 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:33.979 "assigned_rate_limits": { 00:10:33.979 "rw_ios_per_sec": 0, 00:10:33.979 "rw_mbytes_per_sec": 0, 00:10:33.979 "r_mbytes_per_sec": 0, 00:10:33.979 "w_mbytes_per_sec": 0 00:10:33.979 }, 00:10:33.979 "claimed": true, 00:10:33.979 "claim_type": "exclusive_write", 00:10:33.979 "zoned": false, 00:10:33.979 "supported_io_types": { 00:10:33.979 "read": true, 00:10:33.979 "write": true, 00:10:33.979 "unmap": true, 00:10:33.979 "flush": true, 00:10:33.979 "reset": true, 00:10:33.979 "nvme_admin": false, 00:10:33.979 "nvme_io": false, 00:10:33.979 "nvme_io_md": false, 00:10:33.979 "write_zeroes": true, 00:10:33.979 "zcopy": true, 00:10:33.979 "get_zone_info": false, 00:10:33.979 "zone_management": false, 00:10:33.979 "zone_append": false, 00:10:33.979 "compare": false, 00:10:33.979 "compare_and_write": false, 00:10:33.979 "abort": true, 00:10:33.979 "seek_hole": false, 00:10:33.979 "seek_data": false, 00:10:33.979 "copy": true, 00:10:33.979 "nvme_iov_md": false 00:10:33.979 }, 00:10:33.979 "memory_domains": [ 00:10:33.979 { 00:10:33.979 "dma_device_id": "system", 00:10:33.979 "dma_device_type": 1 00:10:33.979 }, 00:10:33.979 { 00:10:33.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.979 "dma_device_type": 2 00:10:33.979 } 00:10:33.979 ], 00:10:33.979 "driver_specific": {} 00:10:33.979 } 00:10:33.979 ] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:33.979 13:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.980 "name": "Existed_Raid", 00:10:33.980 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:33.980 "strip_size_kb": 64, 00:10:33.980 
"state": "online", 00:10:33.980 "raid_level": "concat", 00:10:33.980 "superblock": true, 00:10:33.980 "num_base_bdevs": 4, 00:10:33.980 "num_base_bdevs_discovered": 4, 00:10:33.980 "num_base_bdevs_operational": 4, 00:10:33.980 "base_bdevs_list": [ 00:10:33.980 { 00:10:33.980 "name": "NewBaseBdev", 00:10:33.980 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:33.980 "is_configured": true, 00:10:33.980 "data_offset": 2048, 00:10:33.980 "data_size": 63488 00:10:33.980 }, 00:10:33.980 { 00:10:33.980 "name": "BaseBdev2", 00:10:33.980 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:33.980 "is_configured": true, 00:10:33.980 "data_offset": 2048, 00:10:33.980 "data_size": 63488 00:10:33.980 }, 00:10:33.980 { 00:10:33.980 "name": "BaseBdev3", 00:10:33.980 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:33.980 "is_configured": true, 00:10:33.980 "data_offset": 2048, 00:10:33.980 "data_size": 63488 00:10:33.980 }, 00:10:33.980 { 00:10:33.980 "name": "BaseBdev4", 00:10:33.980 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:33.980 "is_configured": true, 00:10:33.980 "data_offset": 2048, 00:10:33.980 "data_size": 63488 00:10:33.980 } 00:10:33.980 ] 00:10:33.980 }' 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.980 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.549 
13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.549 [2024-11-26 13:23:22.949523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.549 "name": "Existed_Raid", 00:10:34.549 "aliases": [ 00:10:34.549 "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303" 00:10:34.549 ], 00:10:34.549 "product_name": "Raid Volume", 00:10:34.549 "block_size": 512, 00:10:34.549 "num_blocks": 253952, 00:10:34.549 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:34.549 "assigned_rate_limits": { 00:10:34.549 "rw_ios_per_sec": 0, 00:10:34.549 "rw_mbytes_per_sec": 0, 00:10:34.549 "r_mbytes_per_sec": 0, 00:10:34.549 "w_mbytes_per_sec": 0 00:10:34.549 }, 00:10:34.549 "claimed": false, 00:10:34.549 "zoned": false, 00:10:34.549 "supported_io_types": { 00:10:34.549 "read": true, 00:10:34.549 "write": true, 00:10:34.549 "unmap": true, 00:10:34.549 "flush": true, 00:10:34.549 "reset": true, 00:10:34.549 "nvme_admin": false, 00:10:34.549 "nvme_io": false, 00:10:34.549 "nvme_io_md": false, 00:10:34.549 "write_zeroes": true, 00:10:34.549 "zcopy": false, 00:10:34.549 "get_zone_info": false, 00:10:34.549 "zone_management": false, 00:10:34.549 "zone_append": false, 00:10:34.549 "compare": false, 00:10:34.549 "compare_and_write": false, 00:10:34.549 "abort": 
false, 00:10:34.549 "seek_hole": false, 00:10:34.549 "seek_data": false, 00:10:34.549 "copy": false, 00:10:34.549 "nvme_iov_md": false 00:10:34.549 }, 00:10:34.549 "memory_domains": [ 00:10:34.549 { 00:10:34.549 "dma_device_id": "system", 00:10:34.549 "dma_device_type": 1 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.549 "dma_device_type": 2 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "dma_device_id": "system", 00:10:34.549 "dma_device_type": 1 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.549 "dma_device_type": 2 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "dma_device_id": "system", 00:10:34.549 "dma_device_type": 1 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.549 "dma_device_type": 2 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "dma_device_id": "system", 00:10:34.549 "dma_device_type": 1 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.549 "dma_device_type": 2 00:10:34.549 } 00:10:34.549 ], 00:10:34.549 "driver_specific": { 00:10:34.549 "raid": { 00:10:34.549 "uuid": "aaa7d3b3-a60a-4727-839d-0bf0c7fa8303", 00:10:34.549 "strip_size_kb": 64, 00:10:34.549 "state": "online", 00:10:34.549 "raid_level": "concat", 00:10:34.549 "superblock": true, 00:10:34.549 "num_base_bdevs": 4, 00:10:34.549 "num_base_bdevs_discovered": 4, 00:10:34.549 "num_base_bdevs_operational": 4, 00:10:34.549 "base_bdevs_list": [ 00:10:34.549 { 00:10:34.549 "name": "NewBaseBdev", 00:10:34.549 "uuid": "1961c84f-6cdc-433c-bd7a-08fedbaee8ec", 00:10:34.549 "is_configured": true, 00:10:34.549 "data_offset": 2048, 00:10:34.549 "data_size": 63488 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "name": "BaseBdev2", 00:10:34.549 "uuid": "ce2aba71-6e6b-4b40-af9d-6921cd7e9926", 00:10:34.549 "is_configured": true, 00:10:34.549 "data_offset": 2048, 00:10:34.549 "data_size": 63488 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 
"name": "BaseBdev3", 00:10:34.549 "uuid": "79356f26-7112-4918-9418-a3715fdcf7cd", 00:10:34.549 "is_configured": true, 00:10:34.549 "data_offset": 2048, 00:10:34.549 "data_size": 63488 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "name": "BaseBdev4", 00:10:34.549 "uuid": "69a075b7-5190-4686-b3c2-3d96b55cd1da", 00:10:34.549 "is_configured": true, 00:10:34.549 "data_offset": 2048, 00:10:34.549 "data_size": 63488 00:10:34.549 } 00:10:34.549 ] 00:10:34.549 } 00:10:34.549 } 00:10:34.549 }' 00:10:34.549 13:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:34.550 BaseBdev2 00:10:34.550 BaseBdev3 00:10:34.550 BaseBdev4' 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.550 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.839 13:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.839 [2024-11-26 13:23:23.329221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.839 [2024-11-26 13:23:23.329265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.839 [2024-11-26 13:23:23.329351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.839 [2024-11-26 13:23:23.329429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.839 [2024-11-26 13:23:23.329446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71498 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71498 ']' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71498 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71498 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71498' 00:10:34.839 killing process with pid 71498 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71498 00:10:34.839 [2024-11-26 13:23:23.367400] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.839 13:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71498 00:10:35.099 [2024-11-26 13:23:23.641538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.036 13:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:36.036 00:10:36.036 real 0m12.404s 00:10:36.036 user 0m20.852s 00:10:36.036 sys 0m1.813s 00:10:36.036 13:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.036 13:23:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.036 ************************************ 00:10:36.036 END TEST raid_state_function_test_sb 00:10:36.036 ************************************ 00:10:36.037 13:23:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:36.037 13:23:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.037 13:23:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.037 13:23:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.037 ************************************ 00:10:36.037 START TEST raid_superblock_test 00:10:36.037 ************************************ 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72179 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72179 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72179 ']' 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.037 13:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.296 [2024-11-26 13:23:24.658334] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:10:36.296 [2024-11-26 13:23:24.658551] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72179 ] 00:10:36.296 [2024-11-26 13:23:24.843283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.555 [2024-11-26 13:23:24.943375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.555 [2024-11-26 13:23:25.112256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.555 [2024-11-26 13:23:25.112319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.123 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:37.124 
13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.124 malloc1 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.124 [2024-11-26 13:23:25.660421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.124 [2024-11-26 13:23:25.660489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.124 [2024-11-26 13:23:25.660521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:37.124 [2024-11-26 13:23:25.660535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.124 [2024-11-26 13:23:25.663011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.124 [2024-11-26 13:23:25.663052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.124 pt1 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.124 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.383 malloc2 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.384 [2024-11-26 13:23:25.706543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.384 [2024-11-26 13:23:25.706599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.384 [2024-11-26 13:23:25.706629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:37.384 [2024-11-26 13:23:25.706643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.384 [2024-11-26 13:23:25.708960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.384 [2024-11-26 13:23:25.708999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.384 
pt2 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.384 malloc3 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.384 [2024-11-26 13:23:25.765031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:37.384 [2024-11-26 13:23:25.765085] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.384 [2024-11-26 13:23:25.765114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:37.384 [2024-11-26 13:23:25.765126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.384 [2024-11-26 13:23:25.767534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.384 [2024-11-26 13:23:25.767573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:37.384 pt3 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.384 malloc4 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.384 [2024-11-26 13:23:25.810815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:37.384 [2024-11-26 13:23:25.810868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.384 [2024-11-26 13:23:25.810894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:37.384 [2024-11-26 13:23:25.810907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.384 [2024-11-26 13:23:25.813343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.384 [2024-11-26 13:23:25.813381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:37.384 pt4 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.384 [2024-11-26 13:23:25.822859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.384 [2024-11-26 
13:23:25.825086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.384 [2024-11-26 13:23:25.825168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:37.384 [2024-11-26 13:23:25.825260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:37.384 [2024-11-26 13:23:25.825471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:37.384 [2024-11-26 13:23:25.825487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:37.384 [2024-11-26 13:23:25.825751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:37.384 [2024-11-26 13:23:25.825935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:37.384 [2024-11-26 13:23:25.825956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:37.384 [2024-11-26 13:23:25.826109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.384 "name": "raid_bdev1", 00:10:37.384 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:37.384 "strip_size_kb": 64, 00:10:37.384 "state": "online", 00:10:37.384 "raid_level": "concat", 00:10:37.384 "superblock": true, 00:10:37.384 "num_base_bdevs": 4, 00:10:37.384 "num_base_bdevs_discovered": 4, 00:10:37.384 "num_base_bdevs_operational": 4, 00:10:37.384 "base_bdevs_list": [ 00:10:37.384 { 00:10:37.384 "name": "pt1", 00:10:37.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.384 "is_configured": true, 00:10:37.384 "data_offset": 2048, 00:10:37.384 "data_size": 63488 00:10:37.384 }, 00:10:37.384 { 00:10:37.384 "name": "pt2", 00:10:37.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.384 "is_configured": true, 00:10:37.384 "data_offset": 2048, 00:10:37.384 "data_size": 63488 00:10:37.384 }, 00:10:37.384 { 00:10:37.384 "name": "pt3", 00:10:37.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.384 "is_configured": true, 00:10:37.384 "data_offset": 2048, 00:10:37.384 
"data_size": 63488 00:10:37.384 }, 00:10:37.384 { 00:10:37.384 "name": "pt4", 00:10:37.384 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.384 "is_configured": true, 00:10:37.384 "data_offset": 2048, 00:10:37.384 "data_size": 63488 00:10:37.384 } 00:10:37.384 ] 00:10:37.384 }' 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.384 13:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.953 [2024-11-26 13:23:26.327217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.953 "name": "raid_bdev1", 00:10:37.953 "aliases": [ 00:10:37.953 "0a1908ac-4472-4b22-a04c-eb759c14d2ec" 
00:10:37.953 ], 00:10:37.953 "product_name": "Raid Volume", 00:10:37.953 "block_size": 512, 00:10:37.953 "num_blocks": 253952, 00:10:37.953 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:37.953 "assigned_rate_limits": { 00:10:37.953 "rw_ios_per_sec": 0, 00:10:37.953 "rw_mbytes_per_sec": 0, 00:10:37.953 "r_mbytes_per_sec": 0, 00:10:37.953 "w_mbytes_per_sec": 0 00:10:37.953 }, 00:10:37.953 "claimed": false, 00:10:37.953 "zoned": false, 00:10:37.953 "supported_io_types": { 00:10:37.953 "read": true, 00:10:37.953 "write": true, 00:10:37.953 "unmap": true, 00:10:37.953 "flush": true, 00:10:37.953 "reset": true, 00:10:37.953 "nvme_admin": false, 00:10:37.953 "nvme_io": false, 00:10:37.953 "nvme_io_md": false, 00:10:37.953 "write_zeroes": true, 00:10:37.953 "zcopy": false, 00:10:37.953 "get_zone_info": false, 00:10:37.953 "zone_management": false, 00:10:37.953 "zone_append": false, 00:10:37.953 "compare": false, 00:10:37.953 "compare_and_write": false, 00:10:37.953 "abort": false, 00:10:37.953 "seek_hole": false, 00:10:37.953 "seek_data": false, 00:10:37.953 "copy": false, 00:10:37.953 "nvme_iov_md": false 00:10:37.953 }, 00:10:37.953 "memory_domains": [ 00:10:37.953 { 00:10:37.953 "dma_device_id": "system", 00:10:37.953 "dma_device_type": 1 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.953 "dma_device_type": 2 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "dma_device_id": "system", 00:10:37.953 "dma_device_type": 1 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.953 "dma_device_type": 2 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "dma_device_id": "system", 00:10:37.953 "dma_device_type": 1 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.953 "dma_device_type": 2 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "dma_device_id": "system", 00:10:37.953 "dma_device_type": 1 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:37.953 "dma_device_type": 2 00:10:37.953 } 00:10:37.953 ], 00:10:37.953 "driver_specific": { 00:10:37.953 "raid": { 00:10:37.953 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:37.953 "strip_size_kb": 64, 00:10:37.953 "state": "online", 00:10:37.953 "raid_level": "concat", 00:10:37.953 "superblock": true, 00:10:37.953 "num_base_bdevs": 4, 00:10:37.953 "num_base_bdevs_discovered": 4, 00:10:37.953 "num_base_bdevs_operational": 4, 00:10:37.953 "base_bdevs_list": [ 00:10:37.953 { 00:10:37.953 "name": "pt1", 00:10:37.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.953 "is_configured": true, 00:10:37.953 "data_offset": 2048, 00:10:37.953 "data_size": 63488 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "name": "pt2", 00:10:37.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.953 "is_configured": true, 00:10:37.953 "data_offset": 2048, 00:10:37.953 "data_size": 63488 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "name": "pt3", 00:10:37.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.953 "is_configured": true, 00:10:37.953 "data_offset": 2048, 00:10:37.953 "data_size": 63488 00:10:37.953 }, 00:10:37.953 { 00:10:37.953 "name": "pt4", 00:10:37.953 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.953 "is_configured": true, 00:10:37.953 "data_offset": 2048, 00:10:37.953 "data_size": 63488 00:10:37.953 } 00:10:37.953 ] 00:10:37.953 } 00:10:37.953 } 00:10:37.953 }' 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:37.953 pt2 00:10:37.953 pt3 00:10:37.953 pt4' 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.953 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.213 13:23:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.213 [2024-11-26 13:23:26.687263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0a1908ac-4472-4b22-a04c-eb759c14d2ec 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0a1908ac-4472-4b22-a04c-eb759c14d2ec ']' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.213 [2024-11-26 13:23:26.730999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.213 [2024-11-26 13:23:26.731024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.213 [2024-11-26 13:23:26.731125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.213 [2024-11-26 13:23:26.731257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.213 [2024-11-26 13:23:26.731294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:38.213 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:38.473 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.474 13:23:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.474 [2024-11-26 13:23:26.883030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:38.474 [2024-11-26 13:23:26.885220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:38.474 [2024-11-26 13:23:26.885312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:38.474 [2024-11-26 13:23:26.885361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:38.474 [2024-11-26 13:23:26.885455] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:38.474 [2024-11-26 13:23:26.885519] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:38.474 [2024-11-26 13:23:26.885552] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:38.474 [2024-11-26 13:23:26.885598] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:38.474 [2024-11-26 13:23:26.885618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.474 [2024-11-26 13:23:26.885632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:38.474 request: 00:10:38.474 { 00:10:38.474 "name": "raid_bdev1", 00:10:38.474 "raid_level": "concat", 00:10:38.474 "base_bdevs": [ 00:10:38.474 "malloc1", 00:10:38.474 "malloc2", 00:10:38.474 "malloc3", 00:10:38.474 "malloc4" 00:10:38.474 ], 00:10:38.474 "strip_size_kb": 64, 00:10:38.474 "superblock": false, 00:10:38.474 "method": "bdev_raid_create", 00:10:38.474 "req_id": 1 00:10:38.474 } 00:10:38.474 Got JSON-RPC error response 00:10:38.474 response: 00:10:38.474 { 00:10:38.474 "code": -17, 00:10:38.474 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:38.474 } 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.474 [2024-11-26 13:23:26.947024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.474 [2024-11-26 13:23:26.947093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.474 [2024-11-26 13:23:26.947113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.474 [2024-11-26 13:23:26.947128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.474 [2024-11-26 13:23:26.949725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.474 [2024-11-26 13:23:26.949792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.474 [2024-11-26 13:23:26.949862] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:38.474 [2024-11-26 13:23:26.949928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.474 pt1 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.474 13:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.474 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.474 "name": "raid_bdev1", 00:10:38.474 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:38.474 "strip_size_kb": 64, 00:10:38.474 "state": "configuring", 00:10:38.474 "raid_level": "concat", 00:10:38.474 "superblock": true, 00:10:38.474 "num_base_bdevs": 4, 00:10:38.474 "num_base_bdevs_discovered": 1, 00:10:38.474 "num_base_bdevs_operational": 4, 00:10:38.474 "base_bdevs_list": [ 00:10:38.474 { 00:10:38.474 "name": "pt1", 00:10:38.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.474 "is_configured": true, 00:10:38.474 "data_offset": 2048, 00:10:38.474 "data_size": 63488 00:10:38.474 }, 00:10:38.474 { 00:10:38.474 "name": null, 00:10:38.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.474 "is_configured": false, 00:10:38.474 "data_offset": 2048, 00:10:38.474 "data_size": 63488 00:10:38.474 }, 00:10:38.474 { 00:10:38.474 "name": null, 00:10:38.474 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.474 "is_configured": false, 00:10:38.474 "data_offset": 2048, 00:10:38.474 "data_size": 63488 00:10:38.474 }, 00:10:38.474 { 00:10:38.474 "name": null, 00:10:38.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.474 "is_configured": false, 00:10:38.474 "data_offset": 2048, 00:10:38.474 "data_size": 63488 00:10:38.474 } 00:10:38.474 ] 00:10:38.474 }' 00:10:38.474 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.474 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.043 [2024-11-26 13:23:27.479142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.043 [2024-11-26 13:23:27.479202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.043 [2024-11-26 13:23:27.479222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:39.043 [2024-11-26 13:23:27.479265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.043 [2024-11-26 13:23:27.479693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.043 [2024-11-26 13:23:27.479735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.043 [2024-11-26 13:23:27.479803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.043 [2024-11-26 13:23:27.479847] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.043 pt2 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.043 [2024-11-26 13:23:27.487168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.043 13:23:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.043 "name": "raid_bdev1", 00:10:39.043 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:39.043 "strip_size_kb": 64, 00:10:39.043 "state": "configuring", 00:10:39.043 "raid_level": "concat", 00:10:39.043 "superblock": true, 00:10:39.043 "num_base_bdevs": 4, 00:10:39.043 "num_base_bdevs_discovered": 1, 00:10:39.043 "num_base_bdevs_operational": 4, 00:10:39.043 "base_bdevs_list": [ 00:10:39.043 { 00:10:39.043 "name": "pt1", 00:10:39.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.043 "is_configured": true, 00:10:39.043 "data_offset": 2048, 00:10:39.043 "data_size": 63488 00:10:39.043 }, 00:10:39.043 { 00:10:39.043 "name": null, 00:10:39.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.043 "is_configured": false, 00:10:39.043 "data_offset": 0, 00:10:39.043 "data_size": 63488 00:10:39.043 }, 00:10:39.043 { 00:10:39.043 "name": null, 00:10:39.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.043 "is_configured": false, 00:10:39.043 "data_offset": 2048, 00:10:39.043 "data_size": 63488 00:10:39.043 }, 00:10:39.043 { 00:10:39.043 "name": null, 00:10:39.043 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.043 "is_configured": false, 00:10:39.043 "data_offset": 2048, 00:10:39.043 "data_size": 63488 00:10:39.043 } 00:10:39.043 ] 00:10:39.043 }' 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.043 13:23:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.612 [2024-11-26 13:23:28.007285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.612 [2024-11-26 13:23:28.007366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.612 [2024-11-26 13:23:28.007392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:39.612 [2024-11-26 13:23:28.007405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.612 [2024-11-26 13:23:28.007846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.612 [2024-11-26 13:23:28.007869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.612 [2024-11-26 13:23:28.007938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.612 [2024-11-26 13:23:28.007962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.612 pt2 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.612 [2024-11-26 13:23:28.015257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:39.612 [2024-11-26 13:23:28.015341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.612 [2024-11-26 13:23:28.015372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:39.612 [2024-11-26 13:23:28.015387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.612 [2024-11-26 13:23:28.015808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.612 [2024-11-26 13:23:28.015838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:39.612 [2024-11-26 13:23:28.015908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:39.612 [2024-11-26 13:23:28.015947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:39.612 pt3 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.612 [2024-11-26 13:23:28.023237] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:39.612 [2024-11-26 13:23:28.023299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.612 [2024-11-26 13:23:28.023324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:39.612 [2024-11-26 13:23:28.023335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.612 [2024-11-26 13:23:28.023745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.612 [2024-11-26 13:23:28.023776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:39.612 [2024-11-26 13:23:28.024067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:39.612 [2024-11-26 13:23:28.024092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:39.612 [2024-11-26 13:23:28.024262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.612 [2024-11-26 13:23:28.024278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.612 [2024-11-26 13:23:28.024537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:39.612 [2024-11-26 13:23:28.024713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.612 [2024-11-26 13:23:28.024732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:39.612 [2024-11-26 13:23:28.024863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.612 pt4 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.612 "name": "raid_bdev1", 00:10:39.612 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:39.612 "strip_size_kb": 64, 00:10:39.612 "state": "online", 00:10:39.612 "raid_level": "concat", 00:10:39.612 
"superblock": true, 00:10:39.612 "num_base_bdevs": 4, 00:10:39.612 "num_base_bdevs_discovered": 4, 00:10:39.612 "num_base_bdevs_operational": 4, 00:10:39.612 "base_bdevs_list": [ 00:10:39.612 { 00:10:39.612 "name": "pt1", 00:10:39.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.612 "is_configured": true, 00:10:39.612 "data_offset": 2048, 00:10:39.612 "data_size": 63488 00:10:39.612 }, 00:10:39.612 { 00:10:39.612 "name": "pt2", 00:10:39.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.612 "is_configured": true, 00:10:39.612 "data_offset": 2048, 00:10:39.612 "data_size": 63488 00:10:39.612 }, 00:10:39.612 { 00:10:39.612 "name": "pt3", 00:10:39.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.612 "is_configured": true, 00:10:39.612 "data_offset": 2048, 00:10:39.612 "data_size": 63488 00:10:39.612 }, 00:10:39.612 { 00:10:39.612 "name": "pt4", 00:10:39.612 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.612 "is_configured": true, 00:10:39.612 "data_offset": 2048, 00:10:39.612 "data_size": 63488 00:10:39.612 } 00:10:39.612 ] 00:10:39.612 }' 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.612 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.181 13:23:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.181 [2024-11-26 13:23:28.543733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.181 "name": "raid_bdev1", 00:10:40.181 "aliases": [ 00:10:40.181 "0a1908ac-4472-4b22-a04c-eb759c14d2ec" 00:10:40.181 ], 00:10:40.181 "product_name": "Raid Volume", 00:10:40.181 "block_size": 512, 00:10:40.181 "num_blocks": 253952, 00:10:40.181 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:40.181 "assigned_rate_limits": { 00:10:40.181 "rw_ios_per_sec": 0, 00:10:40.181 "rw_mbytes_per_sec": 0, 00:10:40.181 "r_mbytes_per_sec": 0, 00:10:40.181 "w_mbytes_per_sec": 0 00:10:40.181 }, 00:10:40.181 "claimed": false, 00:10:40.181 "zoned": false, 00:10:40.181 "supported_io_types": { 00:10:40.181 "read": true, 00:10:40.181 "write": true, 00:10:40.181 "unmap": true, 00:10:40.181 "flush": true, 00:10:40.181 "reset": true, 00:10:40.181 "nvme_admin": false, 00:10:40.181 "nvme_io": false, 00:10:40.181 "nvme_io_md": false, 00:10:40.181 "write_zeroes": true, 00:10:40.181 "zcopy": false, 00:10:40.181 "get_zone_info": false, 00:10:40.181 "zone_management": false, 00:10:40.181 "zone_append": false, 00:10:40.181 "compare": false, 00:10:40.181 "compare_and_write": false, 00:10:40.181 "abort": false, 00:10:40.181 "seek_hole": false, 00:10:40.181 "seek_data": false, 00:10:40.181 "copy": false, 00:10:40.181 "nvme_iov_md": false 00:10:40.181 }, 00:10:40.181 
"memory_domains": [ 00:10:40.181 { 00:10:40.181 "dma_device_id": "system", 00:10:40.181 "dma_device_type": 1 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.181 "dma_device_type": 2 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "dma_device_id": "system", 00:10:40.181 "dma_device_type": 1 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.181 "dma_device_type": 2 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "dma_device_id": "system", 00:10:40.181 "dma_device_type": 1 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.181 "dma_device_type": 2 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "dma_device_id": "system", 00:10:40.181 "dma_device_type": 1 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.181 "dma_device_type": 2 00:10:40.181 } 00:10:40.181 ], 00:10:40.181 "driver_specific": { 00:10:40.181 "raid": { 00:10:40.181 "uuid": "0a1908ac-4472-4b22-a04c-eb759c14d2ec", 00:10:40.181 "strip_size_kb": 64, 00:10:40.181 "state": "online", 00:10:40.181 "raid_level": "concat", 00:10:40.181 "superblock": true, 00:10:40.181 "num_base_bdevs": 4, 00:10:40.181 "num_base_bdevs_discovered": 4, 00:10:40.181 "num_base_bdevs_operational": 4, 00:10:40.181 "base_bdevs_list": [ 00:10:40.181 { 00:10:40.181 "name": "pt1", 00:10:40.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.181 "is_configured": true, 00:10:40.181 "data_offset": 2048, 00:10:40.181 "data_size": 63488 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "name": "pt2", 00:10:40.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.181 "is_configured": true, 00:10:40.181 "data_offset": 2048, 00:10:40.181 "data_size": 63488 00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "name": "pt3", 00:10:40.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.181 "is_configured": true, 00:10:40.181 "data_offset": 2048, 00:10:40.181 "data_size": 63488 
00:10:40.181 }, 00:10:40.181 { 00:10:40.181 "name": "pt4", 00:10:40.181 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.181 "is_configured": true, 00:10:40.181 "data_offset": 2048, 00:10:40.181 "data_size": 63488 00:10:40.181 } 00:10:40.181 ] 00:10:40.181 } 00:10:40.181 } 00:10:40.181 }' 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:40.181 pt2 00:10:40.181 pt3 00:10:40.181 pt4' 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.181 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:40.441 [2024-11-26 13:23:28.915802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0a1908ac-4472-4b22-a04c-eb759c14d2ec '!=' 0a1908ac-4472-4b22-a04c-eb759c14d2ec ']' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72179 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72179 ']' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72179 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72179 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.441 killing process with pid 72179 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72179' 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72179 00:10:40.441 [2024-11-26 13:23:28.988250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.441 [2024-11-26 13:23:28.988310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.441 13:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72179 00:10:40.441 [2024-11-26 13:23:28.988398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.441 [2024-11-26 13:23:28.988413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:40.699 [2024-11-26 13:23:29.259729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.637 13:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:41.637 00:10:41.637 real 0m5.555s 00:10:41.637 user 0m8.499s 00:10:41.637 sys 0m0.909s 00:10:41.637 13:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.637 13:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.637 ************************************ 00:10:41.637 END TEST raid_superblock_test 
00:10:41.637 ************************************ 00:10:41.637 13:23:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:41.637 13:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:41.637 13:23:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.637 13:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.637 ************************************ 00:10:41.637 START TEST raid_read_error_test 00:10:41.637 ************************************ 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7mw7C3iiWY 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72440 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72440 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72440 ']' 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.637 13:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.896 [2024-11-26 13:23:30.282374] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:10:41.896 [2024-11-26 13:23:30.282541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72440 ] 00:10:42.155 [2024-11-26 13:23:30.461609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.155 [2024-11-26 13:23:30.563834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.414 [2024-11-26 13:23:30.732046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.414 [2024-11-26 13:23:30.732111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.982 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.982 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:42.982 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.982 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 BaseBdev1_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 true 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-26 13:23:31.294543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:42.983 [2024-11-26 13:23:31.294629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.983 [2024-11-26 13:23:31.294687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:42.983 [2024-11-26 13:23:31.294704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.983 [2024-11-26 13:23:31.297438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.983 [2024-11-26 13:23:31.297503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:42.983 BaseBdev1 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 BaseBdev2_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 true 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-26 13:23:31.345343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:42.983 [2024-11-26 13:23:31.345417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.983 [2024-11-26 13:23:31.345440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:42.983 [2024-11-26 13:23:31.345456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.983 [2024-11-26 13:23:31.347942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.983 [2024-11-26 13:23:31.347991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:42.983 BaseBdev2 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 BaseBdev3_malloc 00:10:42.983 13:23:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 true 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-26 13:23:31.407164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:42.983 [2024-11-26 13:23:31.407220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.983 [2024-11-26 13:23:31.407271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:42.983 [2024-11-26 13:23:31.407290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.983 [2024-11-26 13:23:31.409683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.983 [2024-11-26 13:23:31.409731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:42.983 BaseBdev3 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 BaseBdev4_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 true 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-26 13:23:31.457700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:42.983 [2024-11-26 13:23:31.457754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.983 [2024-11-26 13:23:31.457776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:42.983 [2024-11-26 13:23:31.457791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.983 [2024-11-26 13:23:31.460364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.983 [2024-11-26 13:23:31.460413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:42.983 BaseBdev4 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-26 13:23:31.465766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.983 [2024-11-26 13:23:31.467877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.983 [2024-11-26 13:23:31.467977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.984 [2024-11-26 13:23:31.468063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.984 [2024-11-26 13:23:31.468366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:42.984 [2024-11-26 13:23:31.468388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.984 [2024-11-26 13:23:31.468696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:42.984 [2024-11-26 13:23:31.468897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:42.984 [2024-11-26 13:23:31.468916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:42.984 [2024-11-26 13:23:31.469117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:42.984 13:23:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.984 "name": "raid_bdev1", 00:10:42.984 "uuid": "f166b420-9aaa-4e46-97ef-52fa2184c7fc", 00:10:42.984 "strip_size_kb": 64, 00:10:42.984 "state": "online", 00:10:42.984 "raid_level": "concat", 00:10:42.984 "superblock": true, 00:10:42.984 "num_base_bdevs": 4, 00:10:42.984 "num_base_bdevs_discovered": 4, 00:10:42.984 "num_base_bdevs_operational": 4, 00:10:42.984 "base_bdevs_list": [ 
00:10:42.984 { 00:10:42.984 "name": "BaseBdev1", 00:10:42.984 "uuid": "93a3d9a1-2f68-575c-ac31-3ea26efb1664", 00:10:42.984 "is_configured": true, 00:10:42.984 "data_offset": 2048, 00:10:42.984 "data_size": 63488 00:10:42.984 }, 00:10:42.984 { 00:10:42.984 "name": "BaseBdev2", 00:10:42.984 "uuid": "97db5e66-6234-5adc-9438-2a6b48f8fe01", 00:10:42.984 "is_configured": true, 00:10:42.984 "data_offset": 2048, 00:10:42.984 "data_size": 63488 00:10:42.984 }, 00:10:42.984 { 00:10:42.984 "name": "BaseBdev3", 00:10:42.984 "uuid": "06714132-59a5-5cf2-9463-9881094f534e", 00:10:42.984 "is_configured": true, 00:10:42.984 "data_offset": 2048, 00:10:42.984 "data_size": 63488 00:10:42.984 }, 00:10:42.984 { 00:10:42.984 "name": "BaseBdev4", 00:10:42.984 "uuid": "7a72c9c5-8883-5515-9981-2aa45f6febeb", 00:10:42.984 "is_configured": true, 00:10:42.984 "data_offset": 2048, 00:10:42.984 "data_size": 63488 00:10:42.984 } 00:10:42.984 ] 00:10:42.984 }' 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.984 13:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.550 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.550 13:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:43.808 [2024-11-26 13:23:32.115008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.746 13:23:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.746 13:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.746 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.746 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.746 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.746 13:23:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.746 "name": "raid_bdev1", 00:10:44.746 "uuid": "f166b420-9aaa-4e46-97ef-52fa2184c7fc", 00:10:44.746 "strip_size_kb": 64, 00:10:44.746 "state": "online", 00:10:44.746 "raid_level": "concat", 00:10:44.746 "superblock": true, 00:10:44.746 "num_base_bdevs": 4, 00:10:44.746 "num_base_bdevs_discovered": 4, 00:10:44.746 "num_base_bdevs_operational": 4, 00:10:44.746 "base_bdevs_list": [ 00:10:44.746 { 00:10:44.746 "name": "BaseBdev1", 00:10:44.746 "uuid": "93a3d9a1-2f68-575c-ac31-3ea26efb1664", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "name": "BaseBdev2", 00:10:44.746 "uuid": "97db5e66-6234-5adc-9438-2a6b48f8fe01", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "name": "BaseBdev3", 00:10:44.746 "uuid": "06714132-59a5-5cf2-9463-9881094f534e", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "name": "BaseBdev4", 00:10:44.746 "uuid": "7a72c9c5-8883-5515-9981-2aa45f6febeb", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 } 00:10:44.746 ] 00:10:44.746 }' 00:10:44.746 13:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.746 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.005 [2024-11-26 13:23:33.527956] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.005 [2024-11-26 13:23:33.527999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.005 [2024-11-26 13:23:33.531047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.005 [2024-11-26 13:23:33.531121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.005 [2024-11-26 13:23:33.531174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.005 [2024-11-26 13:23:33.531195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:45.005 { 00:10:45.005 "results": [ 00:10:45.005 { 00:10:45.005 "job": "raid_bdev1", 00:10:45.005 "core_mask": "0x1", 00:10:45.005 "workload": "randrw", 00:10:45.005 "percentage": 50, 00:10:45.005 "status": "finished", 00:10:45.005 "queue_depth": 1, 00:10:45.005 "io_size": 131072, 00:10:45.005 "runtime": 1.410884, 00:10:45.005 "iops": 12991.145976565047, 00:10:45.005 "mibps": 1623.8932470706309, 00:10:45.005 "io_failed": 1, 00:10:45.005 "io_timeout": 0, 00:10:45.005 "avg_latency_us": 107.78762366711302, 00:10:45.005 "min_latency_us": 33.97818181818182, 00:10:45.005 "max_latency_us": 1549.0327272727272 00:10:45.005 } 00:10:45.005 ], 00:10:45.005 "core_count": 1 00:10:45.005 } 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72440 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72440 ']' 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72440 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72440 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.005 killing process with pid 72440 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72440' 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72440 00:10:45.005 [2024-11-26 13:23:33.568256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.005 13:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72440 00:10:45.264 [2024-11-26 13:23:33.790057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7mw7C3iiWY 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:46.201 00:10:46.201 real 0m4.522s 00:10:46.201 user 0m5.671s 00:10:46.201 sys 0m0.581s 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:46.201 13:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.201 ************************************ 00:10:46.201 END TEST raid_read_error_test 00:10:46.201 ************************************ 00:10:46.201 13:23:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:46.201 13:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:46.201 13:23:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.201 13:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.201 ************************************ 00:10:46.201 START TEST raid_write_error_test 00:10:46.201 ************************************ 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xCK4T4LyK7 00:10:46.201 13:23:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72587 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:46.201 13:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72587 00:10:46.202 13:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72587 ']' 00:10:46.202 13:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.202 13:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.202 13:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.202 13:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.202 13:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.461 [2024-11-26 13:23:34.872811] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:10:46.461 [2024-11-26 13:23:34.873006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72587 ] 00:10:46.720 [2024-11-26 13:23:35.063003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.720 [2024-11-26 13:23:35.220682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.979 [2024-11-26 13:23:35.413224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.979 [2024-11-26 13:23:35.413312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 BaseBdev1_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 true 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 [2024-11-26 13:23:35.871815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:47.549 [2024-11-26 13:23:35.871898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.549 [2024-11-26 13:23:35.871925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:47.549 [2024-11-26 13:23:35.871942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.549 [2024-11-26 13:23:35.874509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.549 [2024-11-26 13:23:35.874553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:47.549 BaseBdev1 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 BaseBdev2_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:47.549 13:23:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 true 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 [2024-11-26 13:23:35.925052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:47.549 [2024-11-26 13:23:35.925111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.549 [2024-11-26 13:23:35.925133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:47.549 [2024-11-26 13:23:35.925149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.549 [2024-11-26 13:23:35.927753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.549 [2024-11-26 13:23:35.927792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:47.549 BaseBdev2 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:47.549 BaseBdev3_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 true 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 [2024-11-26 13:23:35.986702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:47.549 [2024-11-26 13:23:35.986782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.549 [2024-11-26 13:23:35.986820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:47.549 [2024-11-26 13:23:35.986836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.549 [2024-11-26 13:23:35.989397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.549 [2024-11-26 13:23:35.989437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:47.549 BaseBdev3 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 BaseBdev4_malloc 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 true 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.549 [2024-11-26 13:23:36.040179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:47.549 [2024-11-26 13:23:36.040244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.549 [2024-11-26 13:23:36.040270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:47.549 [2024-11-26 13:23:36.040286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.549 [2024-11-26 13:23:36.042855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.549 [2024-11-26 13:23:36.042898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:47.549 BaseBdev4 
00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:47.549 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.550 [2024-11-26 13:23:36.048284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.550 [2024-11-26 13:23:36.050555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.550 [2024-11-26 13:23:36.050660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.550 [2024-11-26 13:23:36.050798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.550 [2024-11-26 13:23:36.051087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:47.550 [2024-11-26 13:23:36.051120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.550 [2024-11-26 13:23:36.051421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:47.550 [2024-11-26 13:23:36.051659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:47.550 [2024-11-26 13:23:36.051684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:47.550 [2024-11-26 13:23:36.051860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.550 "name": "raid_bdev1", 00:10:47.550 "uuid": "6f524acd-e3fd-4e27-964c-f6dcd86beed5", 00:10:47.550 "strip_size_kb": 64, 00:10:47.550 "state": "online", 00:10:47.550 "raid_level": "concat", 00:10:47.550 "superblock": true, 00:10:47.550 "num_base_bdevs": 4, 00:10:47.550 "num_base_bdevs_discovered": 4, 00:10:47.550 
"num_base_bdevs_operational": 4, 00:10:47.550 "base_bdevs_list": [ 00:10:47.550 { 00:10:47.550 "name": "BaseBdev1", 00:10:47.550 "uuid": "3776d21a-5d61-5cf0-b666-a02faefbd6ac", 00:10:47.550 "is_configured": true, 00:10:47.550 "data_offset": 2048, 00:10:47.550 "data_size": 63488 00:10:47.550 }, 00:10:47.550 { 00:10:47.550 "name": "BaseBdev2", 00:10:47.550 "uuid": "2e994d9c-1675-5720-af30-a4eb8fef7c1d", 00:10:47.550 "is_configured": true, 00:10:47.550 "data_offset": 2048, 00:10:47.550 "data_size": 63488 00:10:47.550 }, 00:10:47.550 { 00:10:47.550 "name": "BaseBdev3", 00:10:47.550 "uuid": "d277d031-870b-5267-a605-1372b540ba52", 00:10:47.550 "is_configured": true, 00:10:47.550 "data_offset": 2048, 00:10:47.550 "data_size": 63488 00:10:47.550 }, 00:10:47.550 { 00:10:47.550 "name": "BaseBdev4", 00:10:47.550 "uuid": "19d0815c-906c-5307-b365-ea6646563444", 00:10:47.550 "is_configured": true, 00:10:47.550 "data_offset": 2048, 00:10:47.550 "data_size": 63488 00:10:47.550 } 00:10:47.550 ] 00:10:47.550 }' 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.550 13:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.118 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:48.118 13:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:48.118 [2024-11-26 13:23:36.637513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.055 13:23:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.055 "name": "raid_bdev1", 00:10:49.055 "uuid": "6f524acd-e3fd-4e27-964c-f6dcd86beed5", 00:10:49.055 "strip_size_kb": 64, 00:10:49.055 "state": "online", 00:10:49.055 "raid_level": "concat", 00:10:49.055 "superblock": true, 00:10:49.055 "num_base_bdevs": 4, 00:10:49.055 "num_base_bdevs_discovered": 4, 00:10:49.055 "num_base_bdevs_operational": 4, 00:10:49.055 "base_bdevs_list": [ 00:10:49.055 { 00:10:49.055 "name": "BaseBdev1", 00:10:49.055 "uuid": "3776d21a-5d61-5cf0-b666-a02faefbd6ac", 00:10:49.055 "is_configured": true, 00:10:49.055 "data_offset": 2048, 00:10:49.055 "data_size": 63488 00:10:49.055 }, 00:10:49.055 { 00:10:49.055 "name": "BaseBdev2", 00:10:49.055 "uuid": "2e994d9c-1675-5720-af30-a4eb8fef7c1d", 00:10:49.055 "is_configured": true, 00:10:49.055 "data_offset": 2048, 00:10:49.055 "data_size": 63488 00:10:49.055 }, 00:10:49.055 { 00:10:49.055 "name": "BaseBdev3", 00:10:49.055 "uuid": "d277d031-870b-5267-a605-1372b540ba52", 00:10:49.055 "is_configured": true, 00:10:49.055 "data_offset": 2048, 00:10:49.055 "data_size": 63488 00:10:49.055 }, 00:10:49.055 { 00:10:49.055 "name": "BaseBdev4", 00:10:49.055 "uuid": "19d0815c-906c-5307-b365-ea6646563444", 00:10:49.055 "is_configured": true, 00:10:49.055 "data_offset": 2048, 00:10:49.055 "data_size": 63488 00:10:49.055 } 00:10:49.055 ] 00:10:49.055 }' 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.055 13:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.623 [2024-11-26 13:23:38.071817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.623 [2024-11-26 13:23:38.071865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.623 [2024-11-26 13:23:38.074739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.623 [2024-11-26 13:23:38.074817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.623 [2024-11-26 13:23:38.074877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.623 [2024-11-26 13:23:38.074899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:49.623 { 00:10:49.623 "results": [ 00:10:49.623 { 00:10:49.623 "job": "raid_bdev1", 00:10:49.623 "core_mask": "0x1", 00:10:49.623 "workload": "randrw", 00:10:49.623 "percentage": 50, 00:10:49.623 "status": "finished", 00:10:49.623 "queue_depth": 1, 00:10:49.623 "io_size": 131072, 00:10:49.623 "runtime": 1.431966, 00:10:49.623 "iops": 11908.103963362259, 00:10:49.623 "mibps": 1488.5129954202823, 00:10:49.623 "io_failed": 1, 00:10:49.623 "io_timeout": 0, 00:10:49.623 "avg_latency_us": 118.33419616916244, 00:10:49.623 "min_latency_us": 35.374545454545455, 00:10:49.623 "max_latency_us": 1504.3490909090908 00:10:49.623 } 00:10:49.623 ], 00:10:49.623 "core_count": 1 00:10:49.623 } 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72587 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72587 ']' 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72587 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72587 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.623 killing process with pid 72587 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72587' 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72587 00:10:49.623 [2024-11-26 13:23:38.110852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.623 13:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72587 00:10:49.883 [2024-11-26 13:23:38.343116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xCK4T4LyK7 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:50.821 00:10:50.821 real 0m4.565s 00:10:50.821 user 0m5.576s 
00:10:50.821 sys 0m0.636s 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.821 ************************************ 00:10:50.821 END TEST raid_write_error_test 00:10:50.821 ************************************ 00:10:50.821 13:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.821 13:23:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:50.821 13:23:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:50.821 13:23:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.821 13:23:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.821 13:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.821 ************************************ 00:10:50.821 START TEST raid_state_function_test 00:10:50.821 ************************************ 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.821 
13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:50.821 13:23:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72727 00:10:50.821 Process raid pid: 72727 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72727' 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:50.821 13:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72727 00:10:50.822 13:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72727 ']' 00:10:50.822 13:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.822 13:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.822 13:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.822 13:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.822 13:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.080 [2024-11-26 13:23:39.478815] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:10:51.080 [2024-11-26 13:23:39.478985] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.339 [2024-11-26 13:23:39.646992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.339 [2024-11-26 13:23:39.758518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.599 [2024-11-26 13:23:39.952528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.599 [2024-11-26 13:23:39.952568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.859 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.859 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.859 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.859 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.859 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.118 [2024-11-26 13:23:40.423031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.118 [2024-11-26 13:23:40.423117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.118 [2024-11-26 13:23:40.423133] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.118 [2024-11-26 13:23:40.423148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.118 [2024-11-26 13:23:40.423157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:52.118 [2024-11-26 13:23:40.423170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.118 [2024-11-26 13:23:40.423179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.118 [2024-11-26 13:23:40.423192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.118 "name": "Existed_Raid", 00:10:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.118 "strip_size_kb": 0, 00:10:52.118 "state": "configuring", 00:10:52.118 "raid_level": "raid1", 00:10:52.118 "superblock": false, 00:10:52.118 "num_base_bdevs": 4, 00:10:52.118 "num_base_bdevs_discovered": 0, 00:10:52.118 "num_base_bdevs_operational": 4, 00:10:52.118 "base_bdevs_list": [ 00:10:52.118 { 00:10:52.118 "name": "BaseBdev1", 00:10:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.118 "is_configured": false, 00:10:52.118 "data_offset": 0, 00:10:52.118 "data_size": 0 00:10:52.118 }, 00:10:52.118 { 00:10:52.118 "name": "BaseBdev2", 00:10:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.118 "is_configured": false, 00:10:52.118 "data_offset": 0, 00:10:52.118 "data_size": 0 00:10:52.118 }, 00:10:52.118 { 00:10:52.118 "name": "BaseBdev3", 00:10:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.118 "is_configured": false, 00:10:52.118 "data_offset": 0, 00:10:52.118 "data_size": 0 00:10:52.118 }, 00:10:52.118 { 00:10:52.118 "name": "BaseBdev4", 00:10:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.118 "is_configured": false, 00:10:52.118 "data_offset": 0, 00:10:52.118 "data_size": 0 00:10:52.118 } 00:10:52.118 ] 00:10:52.118 }' 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.118 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.377 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:52.377 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.377 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 [2024-11-26 13:23:40.943074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.650 [2024-11-26 13:23:40.943131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 [2024-11-26 13:23:40.951078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.650 [2024-11-26 13:23:40.951151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.650 [2024-11-26 13:23:40.951166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.650 [2024-11-26 13:23:40.951186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.650 [2024-11-26 13:23:40.951196] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.650 [2024-11-26 13:23:40.951210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.650 [2024-11-26 13:23:40.951218] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.650 [2024-11-26 13:23:40.951249] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 [2024-11-26 13:23:40.995425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.650 BaseBdev1 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.650 13:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 [ 00:10:52.650 { 00:10:52.650 "name": "BaseBdev1", 00:10:52.650 "aliases": [ 00:10:52.650 "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e" 00:10:52.650 ], 00:10:52.650 "product_name": "Malloc disk", 00:10:52.650 "block_size": 512, 00:10:52.650 "num_blocks": 65536, 00:10:52.650 "uuid": "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e", 00:10:52.650 "assigned_rate_limits": { 00:10:52.650 "rw_ios_per_sec": 0, 00:10:52.650 "rw_mbytes_per_sec": 0, 00:10:52.650 "r_mbytes_per_sec": 0, 00:10:52.650 "w_mbytes_per_sec": 0 00:10:52.650 }, 00:10:52.650 "claimed": true, 00:10:52.650 "claim_type": "exclusive_write", 00:10:52.650 "zoned": false, 00:10:52.650 "supported_io_types": { 00:10:52.650 "read": true, 00:10:52.650 "write": true, 00:10:52.650 "unmap": true, 00:10:52.650 "flush": true, 00:10:52.650 "reset": true, 00:10:52.650 "nvme_admin": false, 00:10:52.650 "nvme_io": false, 00:10:52.650 "nvme_io_md": false, 00:10:52.650 "write_zeroes": true, 00:10:52.650 "zcopy": true, 00:10:52.650 "get_zone_info": false, 00:10:52.650 "zone_management": false, 00:10:52.650 "zone_append": false, 00:10:52.650 "compare": false, 00:10:52.650 "compare_and_write": false, 00:10:52.650 "abort": true, 00:10:52.650 "seek_hole": false, 00:10:52.650 "seek_data": false, 00:10:52.650 "copy": true, 00:10:52.650 "nvme_iov_md": false 00:10:52.650 }, 00:10:52.650 "memory_domains": [ 00:10:52.650 { 00:10:52.650 "dma_device_id": "system", 00:10:52.650 "dma_device_type": 1 00:10:52.650 }, 00:10:52.650 { 00:10:52.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.650 "dma_device_type": 2 00:10:52.650 } 00:10:52.650 ], 00:10:52.650 "driver_specific": {} 00:10:52.650 } 00:10:52.650 ] 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.650 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.650 "name": "Existed_Raid", 
00:10:52.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.650 "strip_size_kb": 0, 00:10:52.650 "state": "configuring", 00:10:52.650 "raid_level": "raid1", 00:10:52.650 "superblock": false, 00:10:52.650 "num_base_bdevs": 4, 00:10:52.650 "num_base_bdevs_discovered": 1, 00:10:52.650 "num_base_bdevs_operational": 4, 00:10:52.650 "base_bdevs_list": [ 00:10:52.650 { 00:10:52.650 "name": "BaseBdev1", 00:10:52.650 "uuid": "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e", 00:10:52.650 "is_configured": true, 00:10:52.650 "data_offset": 0, 00:10:52.650 "data_size": 65536 00:10:52.650 }, 00:10:52.650 { 00:10:52.650 "name": "BaseBdev2", 00:10:52.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.651 "is_configured": false, 00:10:52.651 "data_offset": 0, 00:10:52.651 "data_size": 0 00:10:52.651 }, 00:10:52.651 { 00:10:52.651 "name": "BaseBdev3", 00:10:52.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.651 "is_configured": false, 00:10:52.651 "data_offset": 0, 00:10:52.651 "data_size": 0 00:10:52.651 }, 00:10:52.651 { 00:10:52.651 "name": "BaseBdev4", 00:10:52.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.651 "is_configured": false, 00:10:52.651 "data_offset": 0, 00:10:52.651 "data_size": 0 00:10:52.651 } 00:10:52.651 ] 00:10:52.651 }' 00:10:52.651 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.651 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.257 [2024-11-26 13:23:41.539557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.257 [2024-11-26 13:23:41.539651] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.257 [2024-11-26 13:23:41.547626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.257 [2024-11-26 13:23:41.549886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.257 [2024-11-26 13:23:41.549930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.257 [2024-11-26 13:23:41.549944] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.257 [2024-11-26 13:23:41.549958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.257 [2024-11-26 13:23:41.549967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.257 [2024-11-26 13:23:41.549978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.257 
13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.257 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.258 "name": "Existed_Raid", 00:10:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.258 "strip_size_kb": 0, 00:10:53.258 "state": "configuring", 00:10:53.258 "raid_level": "raid1", 00:10:53.258 "superblock": false, 00:10:53.258 "num_base_bdevs": 4, 00:10:53.258 "num_base_bdevs_discovered": 1, 
00:10:53.258 "num_base_bdevs_operational": 4, 00:10:53.258 "base_bdevs_list": [ 00:10:53.258 { 00:10:53.258 "name": "BaseBdev1", 00:10:53.258 "uuid": "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e", 00:10:53.258 "is_configured": true, 00:10:53.258 "data_offset": 0, 00:10:53.258 "data_size": 65536 00:10:53.258 }, 00:10:53.258 { 00:10:53.258 "name": "BaseBdev2", 00:10:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.258 "is_configured": false, 00:10:53.258 "data_offset": 0, 00:10:53.258 "data_size": 0 00:10:53.258 }, 00:10:53.258 { 00:10:53.258 "name": "BaseBdev3", 00:10:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.258 "is_configured": false, 00:10:53.258 "data_offset": 0, 00:10:53.258 "data_size": 0 00:10:53.258 }, 00:10:53.258 { 00:10:53.258 "name": "BaseBdev4", 00:10:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.258 "is_configured": false, 00:10:53.258 "data_offset": 0, 00:10:53.258 "data_size": 0 00:10:53.258 } 00:10:53.258 ] 00:10:53.258 }' 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.258 13:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.517 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.517 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.517 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.776 [2024-11-26 13:23:42.112747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.776 BaseBdev2 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.776 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.776 [ 00:10:53.776 { 00:10:53.776 "name": "BaseBdev2", 00:10:53.776 "aliases": [ 00:10:53.776 "11b9ce0d-d35a-41c8-9f21-bf641aac8c32" 00:10:53.776 ], 00:10:53.776 "product_name": "Malloc disk", 00:10:53.776 "block_size": 512, 00:10:53.776 "num_blocks": 65536, 00:10:53.776 "uuid": "11b9ce0d-d35a-41c8-9f21-bf641aac8c32", 00:10:53.776 "assigned_rate_limits": { 00:10:53.776 "rw_ios_per_sec": 0, 00:10:53.776 "rw_mbytes_per_sec": 0, 00:10:53.776 "r_mbytes_per_sec": 0, 00:10:53.776 "w_mbytes_per_sec": 0 00:10:53.776 }, 00:10:53.776 "claimed": true, 00:10:53.776 "claim_type": "exclusive_write", 00:10:53.776 "zoned": false, 00:10:53.776 "supported_io_types": { 00:10:53.776 "read": true, 
00:10:53.776 "write": true, 00:10:53.776 "unmap": true, 00:10:53.776 "flush": true, 00:10:53.776 "reset": true, 00:10:53.776 "nvme_admin": false, 00:10:53.776 "nvme_io": false, 00:10:53.776 "nvme_io_md": false, 00:10:53.777 "write_zeroes": true, 00:10:53.777 "zcopy": true, 00:10:53.777 "get_zone_info": false, 00:10:53.777 "zone_management": false, 00:10:53.777 "zone_append": false, 00:10:53.777 "compare": false, 00:10:53.777 "compare_and_write": false, 00:10:53.777 "abort": true, 00:10:53.777 "seek_hole": false, 00:10:53.777 "seek_data": false, 00:10:53.777 "copy": true, 00:10:53.777 "nvme_iov_md": false 00:10:53.777 }, 00:10:53.777 "memory_domains": [ 00:10:53.777 { 00:10:53.777 "dma_device_id": "system", 00:10:53.777 "dma_device_type": 1 00:10:53.777 }, 00:10:53.777 { 00:10:53.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.777 "dma_device_type": 2 00:10:53.777 } 00:10:53.777 ], 00:10:53.777 "driver_specific": {} 00:10:53.777 } 00:10:53.777 ] 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.777 "name": "Existed_Raid", 00:10:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.777 "strip_size_kb": 0, 00:10:53.777 "state": "configuring", 00:10:53.777 "raid_level": "raid1", 00:10:53.777 "superblock": false, 00:10:53.777 "num_base_bdevs": 4, 00:10:53.777 "num_base_bdevs_discovered": 2, 00:10:53.777 "num_base_bdevs_operational": 4, 00:10:53.777 "base_bdevs_list": [ 00:10:53.777 { 00:10:53.777 "name": "BaseBdev1", 00:10:53.777 "uuid": "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e", 00:10:53.777 "is_configured": true, 00:10:53.777 "data_offset": 0, 00:10:53.777 "data_size": 65536 00:10:53.777 }, 00:10:53.777 { 00:10:53.777 "name": "BaseBdev2", 00:10:53.777 "uuid": "11b9ce0d-d35a-41c8-9f21-bf641aac8c32", 00:10:53.777 "is_configured": true, 
00:10:53.777 "data_offset": 0, 00:10:53.777 "data_size": 65536 00:10:53.777 }, 00:10:53.777 { 00:10:53.777 "name": "BaseBdev3", 00:10:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.777 "is_configured": false, 00:10:53.777 "data_offset": 0, 00:10:53.777 "data_size": 0 00:10:53.777 }, 00:10:53.777 { 00:10:53.777 "name": "BaseBdev4", 00:10:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.777 "is_configured": false, 00:10:53.777 "data_offset": 0, 00:10:53.777 "data_size": 0 00:10:53.777 } 00:10:53.777 ] 00:10:53.777 }' 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.777 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.346 [2024-11-26 13:23:42.704707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.346 BaseBdev3 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.346 [ 00:10:54.346 { 00:10:54.346 "name": "BaseBdev3", 00:10:54.346 "aliases": [ 00:10:54.346 "a635f251-231c-4099-b5b8-ec2d1e7488c0" 00:10:54.346 ], 00:10:54.346 "product_name": "Malloc disk", 00:10:54.346 "block_size": 512, 00:10:54.346 "num_blocks": 65536, 00:10:54.346 "uuid": "a635f251-231c-4099-b5b8-ec2d1e7488c0", 00:10:54.346 "assigned_rate_limits": { 00:10:54.346 "rw_ios_per_sec": 0, 00:10:54.346 "rw_mbytes_per_sec": 0, 00:10:54.346 "r_mbytes_per_sec": 0, 00:10:54.346 "w_mbytes_per_sec": 0 00:10:54.346 }, 00:10:54.346 "claimed": true, 00:10:54.346 "claim_type": "exclusive_write", 00:10:54.346 "zoned": false, 00:10:54.346 "supported_io_types": { 00:10:54.346 "read": true, 00:10:54.346 "write": true, 00:10:54.346 "unmap": true, 00:10:54.346 "flush": true, 00:10:54.346 "reset": true, 00:10:54.346 "nvme_admin": false, 00:10:54.346 "nvme_io": false, 00:10:54.346 "nvme_io_md": false, 00:10:54.346 "write_zeroes": true, 00:10:54.346 "zcopy": true, 00:10:54.346 "get_zone_info": false, 00:10:54.346 "zone_management": false, 00:10:54.346 "zone_append": false, 00:10:54.346 "compare": false, 00:10:54.346 "compare_and_write": false, 
00:10:54.346 "abort": true, 00:10:54.346 "seek_hole": false, 00:10:54.346 "seek_data": false, 00:10:54.346 "copy": true, 00:10:54.346 "nvme_iov_md": false 00:10:54.346 }, 00:10:54.346 "memory_domains": [ 00:10:54.346 { 00:10:54.346 "dma_device_id": "system", 00:10:54.346 "dma_device_type": 1 00:10:54.346 }, 00:10:54.346 { 00:10:54.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.346 "dma_device_type": 2 00:10:54.346 } 00:10:54.346 ], 00:10:54.346 "driver_specific": {} 00:10:54.346 } 00:10:54.346 ] 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.346 "name": "Existed_Raid", 00:10:54.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.346 "strip_size_kb": 0, 00:10:54.346 "state": "configuring", 00:10:54.346 "raid_level": "raid1", 00:10:54.346 "superblock": false, 00:10:54.346 "num_base_bdevs": 4, 00:10:54.346 "num_base_bdevs_discovered": 3, 00:10:54.346 "num_base_bdevs_operational": 4, 00:10:54.346 "base_bdevs_list": [ 00:10:54.346 { 00:10:54.346 "name": "BaseBdev1", 00:10:54.346 "uuid": "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e", 00:10:54.346 "is_configured": true, 00:10:54.346 "data_offset": 0, 00:10:54.346 "data_size": 65536 00:10:54.346 }, 00:10:54.346 { 00:10:54.346 "name": "BaseBdev2", 00:10:54.346 "uuid": "11b9ce0d-d35a-41c8-9f21-bf641aac8c32", 00:10:54.346 "is_configured": true, 00:10:54.346 "data_offset": 0, 00:10:54.346 "data_size": 65536 00:10:54.346 }, 00:10:54.346 { 00:10:54.346 "name": "BaseBdev3", 00:10:54.346 "uuid": "a635f251-231c-4099-b5b8-ec2d1e7488c0", 00:10:54.346 "is_configured": true, 00:10:54.346 "data_offset": 0, 00:10:54.346 "data_size": 65536 00:10:54.346 }, 00:10:54.346 { 00:10:54.346 "name": "BaseBdev4", 00:10:54.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.346 "is_configured": false, 
00:10:54.346 "data_offset": 0, 00:10:54.346 "data_size": 0 00:10:54.346 } 00:10:54.346 ] 00:10:54.346 }' 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.346 13:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.915 [2024-11-26 13:23:43.277363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.915 [2024-11-26 13:23:43.277417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:54.915 [2024-11-26 13:23:43.277428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:54.915 [2024-11-26 13:23:43.277731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:54.915 [2024-11-26 13:23:43.277935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:54.915 [2024-11-26 13:23:43.277956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:54.915 [2024-11-26 13:23:43.278220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.915 BaseBdev4 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.915 [ 00:10:54.915 { 00:10:54.915 "name": "BaseBdev4", 00:10:54.915 "aliases": [ 00:10:54.915 "7cc7abb5-48e7-4539-b391-22e9d279fa93" 00:10:54.915 ], 00:10:54.915 "product_name": "Malloc disk", 00:10:54.915 "block_size": 512, 00:10:54.915 "num_blocks": 65536, 00:10:54.915 "uuid": "7cc7abb5-48e7-4539-b391-22e9d279fa93", 00:10:54.915 "assigned_rate_limits": { 00:10:54.915 "rw_ios_per_sec": 0, 00:10:54.915 "rw_mbytes_per_sec": 0, 00:10:54.915 "r_mbytes_per_sec": 0, 00:10:54.915 "w_mbytes_per_sec": 0 00:10:54.915 }, 00:10:54.915 "claimed": true, 00:10:54.915 "claim_type": "exclusive_write", 00:10:54.915 "zoned": false, 00:10:54.915 "supported_io_types": { 00:10:54.915 "read": true, 00:10:54.915 "write": true, 00:10:54.915 "unmap": true, 00:10:54.915 "flush": true, 00:10:54.915 "reset": true, 00:10:54.915 
"nvme_admin": false, 00:10:54.915 "nvme_io": false, 00:10:54.915 "nvme_io_md": false, 00:10:54.915 "write_zeroes": true, 00:10:54.915 "zcopy": true, 00:10:54.915 "get_zone_info": false, 00:10:54.915 "zone_management": false, 00:10:54.915 "zone_append": false, 00:10:54.915 "compare": false, 00:10:54.915 "compare_and_write": false, 00:10:54.915 "abort": true, 00:10:54.915 "seek_hole": false, 00:10:54.915 "seek_data": false, 00:10:54.915 "copy": true, 00:10:54.915 "nvme_iov_md": false 00:10:54.915 }, 00:10:54.915 "memory_domains": [ 00:10:54.915 { 00:10:54.915 "dma_device_id": "system", 00:10:54.915 "dma_device_type": 1 00:10:54.915 }, 00:10:54.915 { 00:10:54.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.915 "dma_device_type": 2 00:10:54.915 } 00:10:54.915 ], 00:10:54.915 "driver_specific": {} 00:10:54.915 } 00:10:54.915 ] 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.915 13:23:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.915 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.916 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.916 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.916 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.916 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.916 "name": "Existed_Raid", 00:10:54.916 "uuid": "b793bdf4-4b44-433e-bfb0-3ef254dd68a0", 00:10:54.916 "strip_size_kb": 0, 00:10:54.916 "state": "online", 00:10:54.916 "raid_level": "raid1", 00:10:54.916 "superblock": false, 00:10:54.916 "num_base_bdevs": 4, 00:10:54.916 "num_base_bdevs_discovered": 4, 00:10:54.916 "num_base_bdevs_operational": 4, 00:10:54.916 "base_bdevs_list": [ 00:10:54.916 { 00:10:54.916 "name": "BaseBdev1", 00:10:54.916 "uuid": "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e", 00:10:54.916 "is_configured": true, 00:10:54.916 "data_offset": 0, 00:10:54.916 "data_size": 65536 00:10:54.916 }, 00:10:54.916 { 00:10:54.916 "name": "BaseBdev2", 00:10:54.916 "uuid": "11b9ce0d-d35a-41c8-9f21-bf641aac8c32", 00:10:54.916 "is_configured": true, 00:10:54.916 "data_offset": 0, 00:10:54.916 "data_size": 65536 00:10:54.916 }, 00:10:54.916 { 00:10:54.916 "name": "BaseBdev3", 00:10:54.916 "uuid": 
"a635f251-231c-4099-b5b8-ec2d1e7488c0", 00:10:54.916 "is_configured": true, 00:10:54.916 "data_offset": 0, 00:10:54.916 "data_size": 65536 00:10:54.916 }, 00:10:54.916 { 00:10:54.916 "name": "BaseBdev4", 00:10:54.916 "uuid": "7cc7abb5-48e7-4539-b391-22e9d279fa93", 00:10:54.916 "is_configured": true, 00:10:54.916 "data_offset": 0, 00:10:54.916 "data_size": 65536 00:10:54.916 } 00:10:54.916 ] 00:10:54.916 }' 00:10:54.916 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.916 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 [2024-11-26 13:23:43.825795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.485 13:23:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.485 "name": "Existed_Raid", 00:10:55.485 "aliases": [ 00:10:55.485 "b793bdf4-4b44-433e-bfb0-3ef254dd68a0" 00:10:55.485 ], 00:10:55.485 "product_name": "Raid Volume", 00:10:55.485 "block_size": 512, 00:10:55.485 "num_blocks": 65536, 00:10:55.485 "uuid": "b793bdf4-4b44-433e-bfb0-3ef254dd68a0", 00:10:55.485 "assigned_rate_limits": { 00:10:55.485 "rw_ios_per_sec": 0, 00:10:55.485 "rw_mbytes_per_sec": 0, 00:10:55.485 "r_mbytes_per_sec": 0, 00:10:55.485 "w_mbytes_per_sec": 0 00:10:55.485 }, 00:10:55.485 "claimed": false, 00:10:55.485 "zoned": false, 00:10:55.485 "supported_io_types": { 00:10:55.485 "read": true, 00:10:55.485 "write": true, 00:10:55.485 "unmap": false, 00:10:55.485 "flush": false, 00:10:55.485 "reset": true, 00:10:55.485 "nvme_admin": false, 00:10:55.485 "nvme_io": false, 00:10:55.485 "nvme_io_md": false, 00:10:55.485 "write_zeroes": true, 00:10:55.485 "zcopy": false, 00:10:55.485 "get_zone_info": false, 00:10:55.485 "zone_management": false, 00:10:55.485 "zone_append": false, 00:10:55.485 "compare": false, 00:10:55.485 "compare_and_write": false, 00:10:55.485 "abort": false, 00:10:55.485 "seek_hole": false, 00:10:55.485 "seek_data": false, 00:10:55.485 "copy": false, 00:10:55.485 "nvme_iov_md": false 00:10:55.485 }, 00:10:55.485 "memory_domains": [ 00:10:55.485 { 00:10:55.485 "dma_device_id": "system", 00:10:55.485 "dma_device_type": 1 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.485 "dma_device_type": 2 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "dma_device_id": "system", 00:10:55.485 "dma_device_type": 1 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.485 "dma_device_type": 2 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "dma_device_id": "system", 00:10:55.485 "dma_device_type": 1 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:55.485 "dma_device_type": 2 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "dma_device_id": "system", 00:10:55.485 "dma_device_type": 1 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.485 "dma_device_type": 2 00:10:55.485 } 00:10:55.485 ], 00:10:55.485 "driver_specific": { 00:10:55.485 "raid": { 00:10:55.485 "uuid": "b793bdf4-4b44-433e-bfb0-3ef254dd68a0", 00:10:55.485 "strip_size_kb": 0, 00:10:55.485 "state": "online", 00:10:55.485 "raid_level": "raid1", 00:10:55.485 "superblock": false, 00:10:55.485 "num_base_bdevs": 4, 00:10:55.485 "num_base_bdevs_discovered": 4, 00:10:55.485 "num_base_bdevs_operational": 4, 00:10:55.485 "base_bdevs_list": [ 00:10:55.485 { 00:10:55.485 "name": "BaseBdev1", 00:10:55.485 "uuid": "dda77edf-c83c-4b68-b61c-8de4a9c3ff9e", 00:10:55.485 "is_configured": true, 00:10:55.485 "data_offset": 0, 00:10:55.485 "data_size": 65536 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "name": "BaseBdev2", 00:10:55.485 "uuid": "11b9ce0d-d35a-41c8-9f21-bf641aac8c32", 00:10:55.485 "is_configured": true, 00:10:55.485 "data_offset": 0, 00:10:55.485 "data_size": 65536 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "name": "BaseBdev3", 00:10:55.485 "uuid": "a635f251-231c-4099-b5b8-ec2d1e7488c0", 00:10:55.485 "is_configured": true, 00:10:55.485 "data_offset": 0, 00:10:55.485 "data_size": 65536 00:10:55.485 }, 00:10:55.485 { 00:10:55.485 "name": "BaseBdev4", 00:10:55.485 "uuid": "7cc7abb5-48e7-4539-b391-22e9d279fa93", 00:10:55.485 "is_configured": true, 00:10:55.485 "data_offset": 0, 00:10:55.485 "data_size": 65536 00:10:55.485 } 00:10:55.485 ] 00:10:55.485 } 00:10:55.485 } 00:10:55.485 }' 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:55.485 BaseBdev2 00:10:55.485 BaseBdev3 
00:10:55.485 BaseBdev4' 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 13:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.485 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.745 13:23:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.745 13:23:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 [2024-11-26 13:23:44.189654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.745 
13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.745 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.004 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.004 "name": "Existed_Raid", 00:10:56.004 "uuid": "b793bdf4-4b44-433e-bfb0-3ef254dd68a0", 00:10:56.004 "strip_size_kb": 0, 00:10:56.004 "state": "online", 00:10:56.004 "raid_level": "raid1", 00:10:56.004 "superblock": false, 00:10:56.004 "num_base_bdevs": 4, 00:10:56.004 "num_base_bdevs_discovered": 3, 00:10:56.004 "num_base_bdevs_operational": 3, 00:10:56.004 "base_bdevs_list": [ 00:10:56.004 { 00:10:56.004 "name": null, 00:10:56.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.004 "is_configured": false, 00:10:56.004 "data_offset": 0, 00:10:56.004 "data_size": 65536 00:10:56.004 }, 00:10:56.004 { 00:10:56.004 "name": "BaseBdev2", 00:10:56.004 "uuid": "11b9ce0d-d35a-41c8-9f21-bf641aac8c32", 00:10:56.004 "is_configured": true, 00:10:56.004 "data_offset": 0, 00:10:56.004 "data_size": 65536 00:10:56.004 }, 00:10:56.004 { 00:10:56.004 "name": "BaseBdev3", 00:10:56.004 "uuid": "a635f251-231c-4099-b5b8-ec2d1e7488c0", 00:10:56.004 "is_configured": true, 00:10:56.004 "data_offset": 0, 
00:10:56.004 "data_size": 65536 00:10:56.004 }, 00:10:56.004 { 00:10:56.004 "name": "BaseBdev4", 00:10:56.004 "uuid": "7cc7abb5-48e7-4539-b391-22e9d279fa93", 00:10:56.004 "is_configured": true, 00:10:56.004 "data_offset": 0, 00:10:56.004 "data_size": 65536 00:10:56.004 } 00:10:56.004 ] 00:10:56.004 }' 00:10:56.005 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.005 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.264 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.264 [2024-11-26 13:23:44.825902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.523 13:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.523 [2024-11-26 13:23:44.950907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.523 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.523 [2024-11-26 13:23:45.074282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:56.523 [2024-11-26 13:23:45.074390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.783 [2024-11-26 13:23:45.143797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.783 [2024-11-26 13:23:45.143849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.783 [2024-11-26 13:23:45.143867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.783 BaseBdev2 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.783 [ 00:10:56.783 { 00:10:56.783 "name": "BaseBdev2", 00:10:56.783 "aliases": [ 00:10:56.783 "89c5b86d-8769-44b6-bfb5-8f04ae355168" 00:10:56.783 ], 00:10:56.783 "product_name": "Malloc disk", 00:10:56.783 "block_size": 512, 00:10:56.783 "num_blocks": 65536, 00:10:56.783 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:10:56.783 "assigned_rate_limits": { 00:10:56.783 "rw_ios_per_sec": 0, 00:10:56.783 "rw_mbytes_per_sec": 0, 00:10:56.783 "r_mbytes_per_sec": 0, 00:10:56.783 "w_mbytes_per_sec": 0 00:10:56.783 }, 00:10:56.783 "claimed": false, 00:10:56.783 "zoned": false, 00:10:56.783 "supported_io_types": { 00:10:56.783 "read": true, 00:10:56.783 "write": true, 00:10:56.783 "unmap": true, 00:10:56.783 "flush": true, 00:10:56.783 "reset": true, 00:10:56.783 "nvme_admin": false, 00:10:56.783 "nvme_io": false, 00:10:56.783 "nvme_io_md": false, 00:10:56.783 "write_zeroes": true, 00:10:56.783 "zcopy": true, 00:10:56.783 "get_zone_info": false, 00:10:56.783 "zone_management": false, 00:10:56.783 "zone_append": false, 
00:10:56.783 "compare": false, 00:10:56.783 "compare_and_write": false, 00:10:56.783 "abort": true, 00:10:56.783 "seek_hole": false, 00:10:56.783 "seek_data": false, 00:10:56.783 "copy": true, 00:10:56.783 "nvme_iov_md": false 00:10:56.783 }, 00:10:56.783 "memory_domains": [ 00:10:56.783 { 00:10:56.783 "dma_device_id": "system", 00:10:56.783 "dma_device_type": 1 00:10:56.783 }, 00:10:56.783 { 00:10:56.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.783 "dma_device_type": 2 00:10:56.783 } 00:10:56.783 ], 00:10:56.783 "driver_specific": {} 00:10:56.783 } 00:10:56.783 ] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.783 BaseBdev3 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.783 [ 00:10:56.783 { 00:10:56.783 "name": "BaseBdev3", 00:10:56.783 "aliases": [ 00:10:56.783 "5540abb7-1591-4ed6-aa97-8536e314c49b" 00:10:56.783 ], 00:10:56.783 "product_name": "Malloc disk", 00:10:56.783 "block_size": 512, 00:10:56.783 "num_blocks": 65536, 00:10:56.783 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:10:56.783 "assigned_rate_limits": { 00:10:56.783 "rw_ios_per_sec": 0, 00:10:56.783 "rw_mbytes_per_sec": 0, 00:10:56.783 "r_mbytes_per_sec": 0, 00:10:56.783 "w_mbytes_per_sec": 0 00:10:56.783 }, 00:10:56.783 "claimed": false, 00:10:56.783 "zoned": false, 00:10:56.783 "supported_io_types": { 00:10:56.783 "read": true, 00:10:56.783 "write": true, 00:10:56.783 "unmap": true, 00:10:56.783 "flush": true, 00:10:56.783 "reset": true, 00:10:56.783 "nvme_admin": false, 00:10:56.783 "nvme_io": false, 00:10:56.783 "nvme_io_md": false, 00:10:56.783 "write_zeroes": true, 00:10:56.783 "zcopy": true, 00:10:56.783 "get_zone_info": false, 00:10:56.783 "zone_management": false, 00:10:56.783 "zone_append": false, 
00:10:56.783 "compare": false, 00:10:56.783 "compare_and_write": false, 00:10:56.783 "abort": true, 00:10:56.783 "seek_hole": false, 00:10:56.783 "seek_data": false, 00:10:56.783 "copy": true, 00:10:56.783 "nvme_iov_md": false 00:10:56.783 }, 00:10:56.783 "memory_domains": [ 00:10:56.783 { 00:10:56.783 "dma_device_id": "system", 00:10:56.783 "dma_device_type": 1 00:10:56.783 }, 00:10:56.783 { 00:10:56.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.783 "dma_device_type": 2 00:10:56.783 } 00:10:56.783 ], 00:10:56.783 "driver_specific": {} 00:10:56.783 } 00:10:56.783 ] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.783 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.784 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.784 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:56.784 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.784 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.043 BaseBdev4 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.043 [ 00:10:57.043 { 00:10:57.043 "name": "BaseBdev4", 00:10:57.043 "aliases": [ 00:10:57.043 "0a891da2-da25-44b3-9268-bce8a9b1bdf9" 00:10:57.043 ], 00:10:57.043 "product_name": "Malloc disk", 00:10:57.043 "block_size": 512, 00:10:57.043 "num_blocks": 65536, 00:10:57.043 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:10:57.043 "assigned_rate_limits": { 00:10:57.043 "rw_ios_per_sec": 0, 00:10:57.043 "rw_mbytes_per_sec": 0, 00:10:57.043 "r_mbytes_per_sec": 0, 00:10:57.043 "w_mbytes_per_sec": 0 00:10:57.043 }, 00:10:57.043 "claimed": false, 00:10:57.043 "zoned": false, 00:10:57.043 "supported_io_types": { 00:10:57.043 "read": true, 00:10:57.043 "write": true, 00:10:57.043 "unmap": true, 00:10:57.043 "flush": true, 00:10:57.043 "reset": true, 00:10:57.043 "nvme_admin": false, 00:10:57.043 "nvme_io": false, 00:10:57.043 "nvme_io_md": false, 00:10:57.043 "write_zeroes": true, 00:10:57.043 "zcopy": true, 00:10:57.043 "get_zone_info": false, 00:10:57.043 "zone_management": false, 00:10:57.043 "zone_append": false, 
00:10:57.043 "compare": false, 00:10:57.043 "compare_and_write": false, 00:10:57.043 "abort": true, 00:10:57.043 "seek_hole": false, 00:10:57.043 "seek_data": false, 00:10:57.043 "copy": true, 00:10:57.043 "nvme_iov_md": false 00:10:57.043 }, 00:10:57.043 "memory_domains": [ 00:10:57.043 { 00:10:57.043 "dma_device_id": "system", 00:10:57.043 "dma_device_type": 1 00:10:57.043 }, 00:10:57.043 { 00:10:57.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.043 "dma_device_type": 2 00:10:57.043 } 00:10:57.043 ], 00:10:57.043 "driver_specific": {} 00:10:57.043 } 00:10:57.043 ] 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.043 [2024-11-26 13:23:45.392104] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.043 [2024-11-26 13:23:45.392168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.043 [2024-11-26 13:23:45.392194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.043 [2024-11-26 13:23:45.394492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.043 [2024-11-26 13:23:45.394563] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.043 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:57.043 "name": "Existed_Raid", 00:10:57.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.043 "strip_size_kb": 0, 00:10:57.043 "state": "configuring", 00:10:57.043 "raid_level": "raid1", 00:10:57.043 "superblock": false, 00:10:57.043 "num_base_bdevs": 4, 00:10:57.043 "num_base_bdevs_discovered": 3, 00:10:57.043 "num_base_bdevs_operational": 4, 00:10:57.043 "base_bdevs_list": [ 00:10:57.043 { 00:10:57.043 "name": "BaseBdev1", 00:10:57.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.043 "is_configured": false, 00:10:57.043 "data_offset": 0, 00:10:57.043 "data_size": 0 00:10:57.043 }, 00:10:57.043 { 00:10:57.043 "name": "BaseBdev2", 00:10:57.043 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:10:57.043 "is_configured": true, 00:10:57.044 "data_offset": 0, 00:10:57.044 "data_size": 65536 00:10:57.044 }, 00:10:57.044 { 00:10:57.044 "name": "BaseBdev3", 00:10:57.044 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:10:57.044 "is_configured": true, 00:10:57.044 "data_offset": 0, 00:10:57.044 "data_size": 65536 00:10:57.044 }, 00:10:57.044 { 00:10:57.044 "name": "BaseBdev4", 00:10:57.044 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:10:57.044 "is_configured": true, 00:10:57.044 "data_offset": 0, 00:10:57.044 "data_size": 65536 00:10:57.044 } 00:10:57.044 ] 00:10:57.044 }' 00:10:57.044 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.044 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.612 [2024-11-26 13:23:45.900183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.612 "name": "Existed_Raid", 00:10:57.612 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:57.612 "strip_size_kb": 0, 00:10:57.612 "state": "configuring", 00:10:57.612 "raid_level": "raid1", 00:10:57.612 "superblock": false, 00:10:57.612 "num_base_bdevs": 4, 00:10:57.612 "num_base_bdevs_discovered": 2, 00:10:57.612 "num_base_bdevs_operational": 4, 00:10:57.612 "base_bdevs_list": [ 00:10:57.612 { 00:10:57.612 "name": "BaseBdev1", 00:10:57.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.612 "is_configured": false, 00:10:57.612 "data_offset": 0, 00:10:57.612 "data_size": 0 00:10:57.612 }, 00:10:57.612 { 00:10:57.612 "name": null, 00:10:57.612 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:10:57.612 "is_configured": false, 00:10:57.612 "data_offset": 0, 00:10:57.612 "data_size": 65536 00:10:57.612 }, 00:10:57.612 { 00:10:57.612 "name": "BaseBdev3", 00:10:57.612 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:10:57.612 "is_configured": true, 00:10:57.612 "data_offset": 0, 00:10:57.612 "data_size": 65536 00:10:57.612 }, 00:10:57.612 { 00:10:57.612 "name": "BaseBdev4", 00:10:57.612 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:10:57.612 "is_configured": true, 00:10:57.612 "data_offset": 0, 00:10:57.612 "data_size": 65536 00:10:57.612 } 00:10:57.612 ] 00:10:57.612 }' 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.612 13:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.871 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.871 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:57.871 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.871 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.871 13:23:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.130 [2024-11-26 13:23:46.495634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.130 BaseBdev1 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.130 [ 00:10:58.130 { 00:10:58.130 "name": "BaseBdev1", 00:10:58.130 "aliases": [ 00:10:58.130 "2f0fac0f-e132-4026-8fac-7c1209591255" 00:10:58.130 ], 00:10:58.130 "product_name": "Malloc disk", 00:10:58.130 "block_size": 512, 00:10:58.130 "num_blocks": 65536, 00:10:58.130 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:10:58.130 "assigned_rate_limits": { 00:10:58.130 "rw_ios_per_sec": 0, 00:10:58.130 "rw_mbytes_per_sec": 0, 00:10:58.130 "r_mbytes_per_sec": 0, 00:10:58.130 "w_mbytes_per_sec": 0 00:10:58.130 }, 00:10:58.130 "claimed": true, 00:10:58.130 "claim_type": "exclusive_write", 00:10:58.130 "zoned": false, 00:10:58.130 "supported_io_types": { 00:10:58.130 "read": true, 00:10:58.130 "write": true, 00:10:58.130 "unmap": true, 00:10:58.130 "flush": true, 00:10:58.130 "reset": true, 00:10:58.130 "nvme_admin": false, 00:10:58.130 "nvme_io": false, 00:10:58.130 "nvme_io_md": false, 00:10:58.130 "write_zeroes": true, 00:10:58.130 "zcopy": true, 00:10:58.130 "get_zone_info": false, 00:10:58.130 "zone_management": false, 00:10:58.130 "zone_append": false, 00:10:58.130 "compare": false, 00:10:58.130 "compare_and_write": false, 00:10:58.130 "abort": true, 00:10:58.130 "seek_hole": false, 00:10:58.130 "seek_data": false, 00:10:58.130 "copy": true, 00:10:58.130 "nvme_iov_md": false 00:10:58.130 }, 00:10:58.130 "memory_domains": [ 00:10:58.130 { 00:10:58.130 "dma_device_id": "system", 00:10:58.130 "dma_device_type": 1 00:10:58.130 }, 00:10:58.130 { 00:10:58.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.130 "dma_device_type": 2 00:10:58.130 } 00:10:58.130 ], 00:10:58.130 "driver_specific": {} 00:10:58.130 } 00:10:58.130 ] 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.130 "name": "Existed_Raid", 00:10:58.130 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:58.130 "strip_size_kb": 0, 00:10:58.130 "state": "configuring", 00:10:58.130 "raid_level": "raid1", 00:10:58.130 "superblock": false, 00:10:58.130 "num_base_bdevs": 4, 00:10:58.130 "num_base_bdevs_discovered": 3, 00:10:58.130 "num_base_bdevs_operational": 4, 00:10:58.130 "base_bdevs_list": [ 00:10:58.130 { 00:10:58.130 "name": "BaseBdev1", 00:10:58.130 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:10:58.130 "is_configured": true, 00:10:58.130 "data_offset": 0, 00:10:58.130 "data_size": 65536 00:10:58.130 }, 00:10:58.130 { 00:10:58.130 "name": null, 00:10:58.130 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:10:58.130 "is_configured": false, 00:10:58.130 "data_offset": 0, 00:10:58.130 "data_size": 65536 00:10:58.130 }, 00:10:58.130 { 00:10:58.130 "name": "BaseBdev3", 00:10:58.130 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:10:58.130 "is_configured": true, 00:10:58.130 "data_offset": 0, 00:10:58.130 "data_size": 65536 00:10:58.130 }, 00:10:58.130 { 00:10:58.130 "name": "BaseBdev4", 00:10:58.130 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:10:58.130 "is_configured": true, 00:10:58.130 "data_offset": 0, 00:10:58.130 "data_size": 65536 00:10:58.130 } 00:10:58.130 ] 00:10:58.130 }' 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.130 13:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.697 [2024-11-26 13:23:47.099840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.697 "name": "Existed_Raid", 00:10:58.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.697 "strip_size_kb": 0, 00:10:58.697 "state": "configuring", 00:10:58.697 "raid_level": "raid1", 00:10:58.697 "superblock": false, 00:10:58.697 "num_base_bdevs": 4, 00:10:58.697 "num_base_bdevs_discovered": 2, 00:10:58.697 "num_base_bdevs_operational": 4, 00:10:58.697 "base_bdevs_list": [ 00:10:58.697 { 00:10:58.697 "name": "BaseBdev1", 00:10:58.697 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:10:58.697 "is_configured": true, 00:10:58.697 "data_offset": 0, 00:10:58.697 "data_size": 65536 00:10:58.697 }, 00:10:58.697 { 00:10:58.697 "name": null, 00:10:58.697 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:10:58.697 "is_configured": false, 00:10:58.697 "data_offset": 0, 00:10:58.697 "data_size": 65536 00:10:58.697 }, 00:10:58.697 { 00:10:58.697 "name": null, 00:10:58.697 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:10:58.697 "is_configured": false, 00:10:58.697 "data_offset": 0, 00:10:58.697 "data_size": 65536 00:10:58.697 }, 00:10:58.697 { 00:10:58.697 "name": "BaseBdev4", 00:10:58.697 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:10:58.697 "is_configured": true, 00:10:58.697 "data_offset": 0, 00:10:58.697 "data_size": 65536 00:10:58.697 } 00:10:58.697 ] 00:10:58.697 }' 00:10:58.697 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.697 13:23:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 [2024-11-26 13:23:47.683946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.265 13:23:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.265 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.265 "name": "Existed_Raid", 00:10:59.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.265 "strip_size_kb": 0, 00:10:59.265 "state": "configuring", 00:10:59.265 "raid_level": "raid1", 00:10:59.265 "superblock": false, 00:10:59.265 "num_base_bdevs": 4, 00:10:59.265 "num_base_bdevs_discovered": 3, 00:10:59.265 "num_base_bdevs_operational": 4, 00:10:59.265 "base_bdevs_list": [ 00:10:59.265 { 00:10:59.265 "name": "BaseBdev1", 00:10:59.265 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:10:59.265 "is_configured": true, 00:10:59.265 "data_offset": 0, 00:10:59.265 "data_size": 65536 00:10:59.265 }, 00:10:59.265 { 00:10:59.265 "name": null, 00:10:59.265 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:10:59.265 "is_configured": false, 00:10:59.265 "data_offset": 
0, 00:10:59.265 "data_size": 65536 00:10:59.265 }, 00:10:59.265 { 00:10:59.265 "name": "BaseBdev3", 00:10:59.265 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:10:59.266 "is_configured": true, 00:10:59.266 "data_offset": 0, 00:10:59.266 "data_size": 65536 00:10:59.266 }, 00:10:59.266 { 00:10:59.266 "name": "BaseBdev4", 00:10:59.266 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:10:59.266 "is_configured": true, 00:10:59.266 "data_offset": 0, 00:10:59.266 "data_size": 65536 00:10:59.266 } 00:10:59.266 ] 00:10:59.266 }' 00:10:59.266 13:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.266 13:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.834 [2024-11-26 13:23:48.256092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.834 13:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.834 "name": "Existed_Raid", 00:10:59.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.834 "strip_size_kb": 0, 00:10:59.834 "state": "configuring", 00:10:59.834 
"raid_level": "raid1", 00:10:59.834 "superblock": false, 00:10:59.834 "num_base_bdevs": 4, 00:10:59.834 "num_base_bdevs_discovered": 2, 00:10:59.834 "num_base_bdevs_operational": 4, 00:10:59.834 "base_bdevs_list": [ 00:10:59.834 { 00:10:59.834 "name": null, 00:10:59.834 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:10:59.834 "is_configured": false, 00:10:59.834 "data_offset": 0, 00:10:59.834 "data_size": 65536 00:10:59.834 }, 00:10:59.834 { 00:10:59.834 "name": null, 00:10:59.834 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:10:59.834 "is_configured": false, 00:10:59.834 "data_offset": 0, 00:10:59.834 "data_size": 65536 00:10:59.834 }, 00:10:59.834 { 00:10:59.834 "name": "BaseBdev3", 00:10:59.834 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:10:59.834 "is_configured": true, 00:10:59.834 "data_offset": 0, 00:10:59.834 "data_size": 65536 00:10:59.834 }, 00:10:59.834 { 00:10:59.834 "name": "BaseBdev4", 00:10:59.834 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:10:59.834 "is_configured": true, 00:10:59.834 "data_offset": 0, 00:10:59.834 "data_size": 65536 00:10:59.834 } 00:10:59.834 ] 00:10:59.834 }' 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.834 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.404 [2024-11-26 13:23:48.903035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.404 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.405 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.405 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.405 "name": "Existed_Raid", 00:11:00.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.405 "strip_size_kb": 0, 00:11:00.405 "state": "configuring", 00:11:00.405 "raid_level": "raid1", 00:11:00.405 "superblock": false, 00:11:00.405 "num_base_bdevs": 4, 00:11:00.405 "num_base_bdevs_discovered": 3, 00:11:00.405 "num_base_bdevs_operational": 4, 00:11:00.405 "base_bdevs_list": [ 00:11:00.405 { 00:11:00.405 "name": null, 00:11:00.405 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:11:00.405 "is_configured": false, 00:11:00.405 "data_offset": 0, 00:11:00.405 "data_size": 65536 00:11:00.405 }, 00:11:00.405 { 00:11:00.405 "name": "BaseBdev2", 00:11:00.405 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:11:00.405 "is_configured": true, 00:11:00.405 "data_offset": 0, 00:11:00.405 "data_size": 65536 00:11:00.405 }, 00:11:00.405 { 00:11:00.405 "name": "BaseBdev3", 00:11:00.405 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:11:00.405 "is_configured": true, 00:11:00.405 "data_offset": 0, 00:11:00.405 "data_size": 65536 00:11:00.405 }, 00:11:00.405 { 00:11:00.405 "name": "BaseBdev4", 00:11:00.405 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:11:00.405 "is_configured": true, 00:11:00.405 "data_offset": 0, 00:11:00.405 "data_size": 65536 00:11:00.405 } 00:11:00.405 ] 00:11:00.405 }' 00:11:00.405 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.405 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.973 13:23:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.973 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.232 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2f0fac0f-e132-4026-8fac-7c1209591255 00:11:01.232 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.232 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.232 [2024-11-26 13:23:49.571305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.232 [2024-11-26 13:23:49.571353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:01.232 [2024-11-26 13:23:49.571367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:01.232 
[2024-11-26 13:23:49.571701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:01.233 [2024-11-26 13:23:49.571913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:01.233 [2024-11-26 13:23:49.571936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:01.233 [2024-11-26 13:23:49.572237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.233 NewBaseBdev 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.233 [ 00:11:01.233 { 00:11:01.233 "name": "NewBaseBdev", 00:11:01.233 "aliases": [ 00:11:01.233 "2f0fac0f-e132-4026-8fac-7c1209591255" 00:11:01.233 ], 00:11:01.233 "product_name": "Malloc disk", 00:11:01.233 "block_size": 512, 00:11:01.233 "num_blocks": 65536, 00:11:01.233 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:11:01.233 "assigned_rate_limits": { 00:11:01.233 "rw_ios_per_sec": 0, 00:11:01.233 "rw_mbytes_per_sec": 0, 00:11:01.233 "r_mbytes_per_sec": 0, 00:11:01.233 "w_mbytes_per_sec": 0 00:11:01.233 }, 00:11:01.233 "claimed": true, 00:11:01.233 "claim_type": "exclusive_write", 00:11:01.233 "zoned": false, 00:11:01.233 "supported_io_types": { 00:11:01.233 "read": true, 00:11:01.233 "write": true, 00:11:01.233 "unmap": true, 00:11:01.233 "flush": true, 00:11:01.233 "reset": true, 00:11:01.233 "nvme_admin": false, 00:11:01.233 "nvme_io": false, 00:11:01.233 "nvme_io_md": false, 00:11:01.233 "write_zeroes": true, 00:11:01.233 "zcopy": true, 00:11:01.233 "get_zone_info": false, 00:11:01.233 "zone_management": false, 00:11:01.233 "zone_append": false, 00:11:01.233 "compare": false, 00:11:01.233 "compare_and_write": false, 00:11:01.233 "abort": true, 00:11:01.233 "seek_hole": false, 00:11:01.233 "seek_data": false, 00:11:01.233 "copy": true, 00:11:01.233 "nvme_iov_md": false 00:11:01.233 }, 00:11:01.233 "memory_domains": [ 00:11:01.233 { 00:11:01.233 "dma_device_id": "system", 00:11:01.233 "dma_device_type": 1 00:11:01.233 }, 00:11:01.233 { 00:11:01.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.233 "dma_device_type": 2 00:11:01.233 } 00:11:01.233 ], 00:11:01.233 "driver_specific": {} 00:11:01.233 } 00:11:01.233 ] 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.233 "name": "Existed_Raid", 00:11:01.233 "uuid": "a7e571f0-5e57-4bd1-9d81-96384071ce72", 00:11:01.233 "strip_size_kb": 0, 00:11:01.233 "state": "online", 00:11:01.233 
"raid_level": "raid1", 00:11:01.233 "superblock": false, 00:11:01.233 "num_base_bdevs": 4, 00:11:01.233 "num_base_bdevs_discovered": 4, 00:11:01.233 "num_base_bdevs_operational": 4, 00:11:01.233 "base_bdevs_list": [ 00:11:01.233 { 00:11:01.233 "name": "NewBaseBdev", 00:11:01.233 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:11:01.233 "is_configured": true, 00:11:01.233 "data_offset": 0, 00:11:01.233 "data_size": 65536 00:11:01.233 }, 00:11:01.233 { 00:11:01.233 "name": "BaseBdev2", 00:11:01.233 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:11:01.233 "is_configured": true, 00:11:01.233 "data_offset": 0, 00:11:01.233 "data_size": 65536 00:11:01.233 }, 00:11:01.233 { 00:11:01.233 "name": "BaseBdev3", 00:11:01.233 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:11:01.233 "is_configured": true, 00:11:01.233 "data_offset": 0, 00:11:01.233 "data_size": 65536 00:11:01.233 }, 00:11:01.233 { 00:11:01.233 "name": "BaseBdev4", 00:11:01.233 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:11:01.233 "is_configured": true, 00:11:01.233 "data_offset": 0, 00:11:01.233 "data_size": 65536 00:11:01.233 } 00:11:01.233 ] 00:11:01.233 }' 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.233 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.803 [2024-11-26 13:23:50.123845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.803 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.803 "name": "Existed_Raid", 00:11:01.803 "aliases": [ 00:11:01.803 "a7e571f0-5e57-4bd1-9d81-96384071ce72" 00:11:01.803 ], 00:11:01.803 "product_name": "Raid Volume", 00:11:01.803 "block_size": 512, 00:11:01.803 "num_blocks": 65536, 00:11:01.803 "uuid": "a7e571f0-5e57-4bd1-9d81-96384071ce72", 00:11:01.803 "assigned_rate_limits": { 00:11:01.803 "rw_ios_per_sec": 0, 00:11:01.803 "rw_mbytes_per_sec": 0, 00:11:01.803 "r_mbytes_per_sec": 0, 00:11:01.803 "w_mbytes_per_sec": 0 00:11:01.803 }, 00:11:01.803 "claimed": false, 00:11:01.803 "zoned": false, 00:11:01.803 "supported_io_types": { 00:11:01.803 "read": true, 00:11:01.803 "write": true, 00:11:01.803 "unmap": false, 00:11:01.803 "flush": false, 00:11:01.803 "reset": true, 00:11:01.803 "nvme_admin": false, 00:11:01.803 "nvme_io": false, 00:11:01.803 "nvme_io_md": false, 00:11:01.803 "write_zeroes": true, 00:11:01.803 "zcopy": false, 00:11:01.803 "get_zone_info": false, 00:11:01.803 "zone_management": false, 00:11:01.803 "zone_append": false, 00:11:01.803 "compare": false, 00:11:01.803 "compare_and_write": false, 00:11:01.803 "abort": false, 00:11:01.803 "seek_hole": false, 00:11:01.803 "seek_data": false, 00:11:01.803 
"copy": false, 00:11:01.803 "nvme_iov_md": false 00:11:01.803 }, 00:11:01.803 "memory_domains": [ 00:11:01.803 { 00:11:01.803 "dma_device_id": "system", 00:11:01.803 "dma_device_type": 1 00:11:01.803 }, 00:11:01.803 { 00:11:01.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.804 "dma_device_type": 2 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "dma_device_id": "system", 00:11:01.804 "dma_device_type": 1 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.804 "dma_device_type": 2 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "dma_device_id": "system", 00:11:01.804 "dma_device_type": 1 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.804 "dma_device_type": 2 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "dma_device_id": "system", 00:11:01.804 "dma_device_type": 1 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.804 "dma_device_type": 2 00:11:01.804 } 00:11:01.804 ], 00:11:01.804 "driver_specific": { 00:11:01.804 "raid": { 00:11:01.804 "uuid": "a7e571f0-5e57-4bd1-9d81-96384071ce72", 00:11:01.804 "strip_size_kb": 0, 00:11:01.804 "state": "online", 00:11:01.804 "raid_level": "raid1", 00:11:01.804 "superblock": false, 00:11:01.804 "num_base_bdevs": 4, 00:11:01.804 "num_base_bdevs_discovered": 4, 00:11:01.804 "num_base_bdevs_operational": 4, 00:11:01.804 "base_bdevs_list": [ 00:11:01.804 { 00:11:01.804 "name": "NewBaseBdev", 00:11:01.804 "uuid": "2f0fac0f-e132-4026-8fac-7c1209591255", 00:11:01.804 "is_configured": true, 00:11:01.804 "data_offset": 0, 00:11:01.804 "data_size": 65536 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "name": "BaseBdev2", 00:11:01.804 "uuid": "89c5b86d-8769-44b6-bfb5-8f04ae355168", 00:11:01.804 "is_configured": true, 00:11:01.804 "data_offset": 0, 00:11:01.804 "data_size": 65536 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "name": "BaseBdev3", 00:11:01.804 "uuid": "5540abb7-1591-4ed6-aa97-8536e314c49b", 00:11:01.804 
"is_configured": true, 00:11:01.804 "data_offset": 0, 00:11:01.804 "data_size": 65536 00:11:01.804 }, 00:11:01.804 { 00:11:01.804 "name": "BaseBdev4", 00:11:01.804 "uuid": "0a891da2-da25-44b3-9268-bce8a9b1bdf9", 00:11:01.804 "is_configured": true, 00:11:01.804 "data_offset": 0, 00:11:01.804 "data_size": 65536 00:11:01.804 } 00:11:01.804 ] 00:11:01.804 } 00:11:01.804 } 00:11:01.804 }' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:01.804 BaseBdev2 00:11:01.804 BaseBdev3 00:11:01.804 BaseBdev4' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.804 13:23:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.804 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.064 13:23:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.064 [2024-11-26 13:23:50.491569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.064 [2024-11-26 13:23:50.491611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.064 [2024-11-26 13:23:50.491701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.064 [2024-11-26 13:23:50.492050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.064 [2024-11-26 13:23:50.492080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72727 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72727 ']' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72727 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72727 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.064 killing process with pid 72727 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72727' 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72727 00:11:02.064 [2024-11-26 13:23:50.526635] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.064 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72727 00:11:02.324 [2024-11-26 13:23:50.806363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.262 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.262 00:11:03.262 real 0m12.355s 00:11:03.262 user 0m20.830s 00:11:03.262 sys 0m1.713s 00:11:03.262 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.262 ************************************ 00:11:03.262 END TEST raid_state_function_test 00:11:03.262 ************************************ 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:03.263 13:23:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:03.263 13:23:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.263 13:23:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.263 13:23:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.263 ************************************ 00:11:03.263 START TEST raid_state_function_test_sb 00:11:03.263 ************************************ 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.263 
13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73410 00:11:03.263 Process raid pid: 73410 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73410' 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73410 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73410 ']' 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.263 13:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.522 [2024-11-26 13:23:51.893719] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:11:03.522 [2024-11-26 13:23:51.893897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.522 [2024-11-26 13:23:52.077187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.782 [2024-11-26 13:23:52.191217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.041 [2024-11-26 13:23:52.385278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.041 [2024-11-26 13:23:52.385330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.299 [2024-11-26 13:23:52.807431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.299 [2024-11-26 13:23:52.807494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.299 [2024-11-26 13:23:52.807510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.299 [2024-11-26 13:23:52.807525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.299 [2024-11-26 13:23:52.807533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:04.299 [2024-11-26 13:23:52.807546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.299 [2024-11-26 13:23:52.807553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.299 [2024-11-26 13:23:52.807565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.299 13:23:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.299 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.558 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.558 "name": "Existed_Raid", 00:11:04.558 "uuid": "eb743ab0-3954-4d52-962f-84d7d79bcde5", 00:11:04.558 "strip_size_kb": 0, 00:11:04.558 "state": "configuring", 00:11:04.558 "raid_level": "raid1", 00:11:04.558 "superblock": true, 00:11:04.558 "num_base_bdevs": 4, 00:11:04.558 "num_base_bdevs_discovered": 0, 00:11:04.558 "num_base_bdevs_operational": 4, 00:11:04.558 "base_bdevs_list": [ 00:11:04.558 { 00:11:04.558 "name": "BaseBdev1", 00:11:04.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.558 "is_configured": false, 00:11:04.558 "data_offset": 0, 00:11:04.558 "data_size": 0 00:11:04.558 }, 00:11:04.558 { 00:11:04.558 "name": "BaseBdev2", 00:11:04.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.558 "is_configured": false, 00:11:04.558 "data_offset": 0, 00:11:04.558 "data_size": 0 00:11:04.558 }, 00:11:04.558 { 00:11:04.558 "name": "BaseBdev3", 00:11:04.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.558 "is_configured": false, 00:11:04.558 "data_offset": 0, 00:11:04.558 "data_size": 0 00:11:04.558 }, 00:11:04.558 { 00:11:04.558 "name": "BaseBdev4", 00:11:04.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.558 "is_configured": false, 00:11:04.558 "data_offset": 0, 00:11:04.558 "data_size": 0 00:11:04.558 } 00:11:04.558 ] 00:11:04.558 }' 00:11:04.558 13:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.558 13:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.818 13:23:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.818 [2024-11-26 13:23:53.331458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.818 [2024-11-26 13:23:53.331501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.818 [2024-11-26 13:23:53.343460] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.818 [2024-11-26 13:23:53.343495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.818 [2024-11-26 13:23:53.343507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.818 [2024-11-26 13:23:53.343520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.818 [2024-11-26 13:23:53.343529] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.818 [2024-11-26 13:23:53.343540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.818 [2024-11-26 13:23:53.343548] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:11:04.818 [2024-11-26 13:23:53.343560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.818 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.077 [2024-11-26 13:23:53.391665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.077 BaseBdev1 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.077 [ 00:11:05.077 { 00:11:05.077 "name": "BaseBdev1", 00:11:05.077 "aliases": [ 00:11:05.077 "064f1e9e-6417-46e5-b766-e65451eab1f3" 00:11:05.077 ], 00:11:05.077 "product_name": "Malloc disk", 00:11:05.077 "block_size": 512, 00:11:05.077 "num_blocks": 65536, 00:11:05.077 "uuid": "064f1e9e-6417-46e5-b766-e65451eab1f3", 00:11:05.077 "assigned_rate_limits": { 00:11:05.077 "rw_ios_per_sec": 0, 00:11:05.077 "rw_mbytes_per_sec": 0, 00:11:05.077 "r_mbytes_per_sec": 0, 00:11:05.077 "w_mbytes_per_sec": 0 00:11:05.077 }, 00:11:05.077 "claimed": true, 00:11:05.077 "claim_type": "exclusive_write", 00:11:05.077 "zoned": false, 00:11:05.077 "supported_io_types": { 00:11:05.077 "read": true, 00:11:05.077 "write": true, 00:11:05.077 "unmap": true, 00:11:05.077 "flush": true, 00:11:05.077 "reset": true, 00:11:05.077 "nvme_admin": false, 00:11:05.077 "nvme_io": false, 00:11:05.077 "nvme_io_md": false, 00:11:05.077 "write_zeroes": true, 00:11:05.077 "zcopy": true, 00:11:05.077 "get_zone_info": false, 00:11:05.077 "zone_management": false, 00:11:05.077 "zone_append": false, 00:11:05.077 "compare": false, 00:11:05.077 "compare_and_write": false, 00:11:05.077 "abort": true, 00:11:05.077 "seek_hole": false, 00:11:05.077 "seek_data": false, 00:11:05.077 "copy": true, 00:11:05.077 "nvme_iov_md": false 00:11:05.077 }, 00:11:05.077 "memory_domains": [ 00:11:05.077 { 00:11:05.077 "dma_device_id": "system", 00:11:05.077 "dma_device_type": 1 00:11:05.077 }, 00:11:05.077 { 00:11:05.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.077 "dma_device_type": 2 00:11:05.077 } 00:11:05.077 
], 00:11:05.077 "driver_specific": {} 00:11:05.077 } 00:11:05.077 ] 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.077 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.078 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.078 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.078 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.078 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.078 13:23:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.078 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.078 "name": "Existed_Raid", 00:11:05.078 "uuid": "5823d3ea-099d-4a9a-8288-7950724bc7de", 00:11:05.078 "strip_size_kb": 0, 00:11:05.078 "state": "configuring", 00:11:05.078 "raid_level": "raid1", 00:11:05.078 "superblock": true, 00:11:05.078 "num_base_bdevs": 4, 00:11:05.078 "num_base_bdevs_discovered": 1, 00:11:05.078 "num_base_bdevs_operational": 4, 00:11:05.078 "base_bdevs_list": [ 00:11:05.078 { 00:11:05.078 "name": "BaseBdev1", 00:11:05.078 "uuid": "064f1e9e-6417-46e5-b766-e65451eab1f3", 00:11:05.078 "is_configured": true, 00:11:05.078 "data_offset": 2048, 00:11:05.078 "data_size": 63488 00:11:05.078 }, 00:11:05.078 { 00:11:05.078 "name": "BaseBdev2", 00:11:05.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.078 "is_configured": false, 00:11:05.078 "data_offset": 0, 00:11:05.078 "data_size": 0 00:11:05.078 }, 00:11:05.078 { 00:11:05.078 "name": "BaseBdev3", 00:11:05.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.078 "is_configured": false, 00:11:05.078 "data_offset": 0, 00:11:05.078 "data_size": 0 00:11:05.078 }, 00:11:05.078 { 00:11:05.078 "name": "BaseBdev4", 00:11:05.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.078 "is_configured": false, 00:11:05.078 "data_offset": 0, 00:11:05.078 "data_size": 0 00:11:05.078 } 00:11:05.078 ] 00:11:05.078 }' 00:11:05.078 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.078 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.646 13:23:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.646 [2024-11-26 13:23:53.927786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.646 [2024-11-26 13:23:53.927829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.646 [2024-11-26 13:23:53.935855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.646 [2024-11-26 13:23:53.938115] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.646 [2024-11-26 13:23:53.938175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.646 [2024-11-26 13:23:53.938189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.646 [2024-11-26 13:23:53.938204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.646 [2024-11-26 13:23:53.938213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.646 [2024-11-26 13:23:53.938225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.646 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:05.646 "name": "Existed_Raid", 00:11:05.646 "uuid": "3252260e-3b1f-4b5c-bede-c424049a2625", 00:11:05.646 "strip_size_kb": 0, 00:11:05.646 "state": "configuring", 00:11:05.646 "raid_level": "raid1", 00:11:05.646 "superblock": true, 00:11:05.646 "num_base_bdevs": 4, 00:11:05.646 "num_base_bdevs_discovered": 1, 00:11:05.646 "num_base_bdevs_operational": 4, 00:11:05.646 "base_bdevs_list": [ 00:11:05.646 { 00:11:05.646 "name": "BaseBdev1", 00:11:05.646 "uuid": "064f1e9e-6417-46e5-b766-e65451eab1f3", 00:11:05.646 "is_configured": true, 00:11:05.646 "data_offset": 2048, 00:11:05.646 "data_size": 63488 00:11:05.646 }, 00:11:05.646 { 00:11:05.646 "name": "BaseBdev2", 00:11:05.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.646 "is_configured": false, 00:11:05.646 "data_offset": 0, 00:11:05.646 "data_size": 0 00:11:05.646 }, 00:11:05.646 { 00:11:05.646 "name": "BaseBdev3", 00:11:05.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.646 "is_configured": false, 00:11:05.646 "data_offset": 0, 00:11:05.647 "data_size": 0 00:11:05.647 }, 00:11:05.647 { 00:11:05.647 "name": "BaseBdev4", 00:11:05.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.647 "is_configured": false, 00:11:05.647 "data_offset": 0, 00:11:05.647 "data_size": 0 00:11:05.647 } 00:11:05.647 ] 00:11:05.647 }' 00:11:05.647 13:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.647 13:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.906 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.906 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.906 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.165 [2024-11-26 13:23:54.504738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:06.165 BaseBdev2 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.165 [ 00:11:06.165 { 00:11:06.165 "name": "BaseBdev2", 00:11:06.165 "aliases": [ 00:11:06.165 "1a83db85-dd2f-480a-8b64-5bb46007e272" 00:11:06.165 ], 00:11:06.165 "product_name": "Malloc disk", 00:11:06.165 "block_size": 512, 00:11:06.165 "num_blocks": 65536, 00:11:06.165 "uuid": "1a83db85-dd2f-480a-8b64-5bb46007e272", 00:11:06.165 
"assigned_rate_limits": { 00:11:06.165 "rw_ios_per_sec": 0, 00:11:06.165 "rw_mbytes_per_sec": 0, 00:11:06.165 "r_mbytes_per_sec": 0, 00:11:06.165 "w_mbytes_per_sec": 0 00:11:06.165 }, 00:11:06.165 "claimed": true, 00:11:06.165 "claim_type": "exclusive_write", 00:11:06.165 "zoned": false, 00:11:06.165 "supported_io_types": { 00:11:06.165 "read": true, 00:11:06.165 "write": true, 00:11:06.165 "unmap": true, 00:11:06.165 "flush": true, 00:11:06.165 "reset": true, 00:11:06.165 "nvme_admin": false, 00:11:06.165 "nvme_io": false, 00:11:06.165 "nvme_io_md": false, 00:11:06.165 "write_zeroes": true, 00:11:06.165 "zcopy": true, 00:11:06.165 "get_zone_info": false, 00:11:06.165 "zone_management": false, 00:11:06.165 "zone_append": false, 00:11:06.165 "compare": false, 00:11:06.165 "compare_and_write": false, 00:11:06.165 "abort": true, 00:11:06.165 "seek_hole": false, 00:11:06.165 "seek_data": false, 00:11:06.165 "copy": true, 00:11:06.165 "nvme_iov_md": false 00:11:06.165 }, 00:11:06.165 "memory_domains": [ 00:11:06.165 { 00:11:06.165 "dma_device_id": "system", 00:11:06.165 "dma_device_type": 1 00:11:06.165 }, 00:11:06.165 { 00:11:06.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.165 "dma_device_type": 2 00:11:06.165 } 00:11:06.165 ], 00:11:06.165 "driver_specific": {} 00:11:06.165 } 00:11:06.165 ] 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.165 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.166 "name": "Existed_Raid", 00:11:06.166 "uuid": "3252260e-3b1f-4b5c-bede-c424049a2625", 00:11:06.166 "strip_size_kb": 0, 00:11:06.166 "state": "configuring", 00:11:06.166 "raid_level": "raid1", 00:11:06.166 "superblock": true, 00:11:06.166 "num_base_bdevs": 4, 00:11:06.166 "num_base_bdevs_discovered": 2, 00:11:06.166 "num_base_bdevs_operational": 4, 
00:11:06.166 "base_bdevs_list": [ 00:11:06.166 { 00:11:06.166 "name": "BaseBdev1", 00:11:06.166 "uuid": "064f1e9e-6417-46e5-b766-e65451eab1f3", 00:11:06.166 "is_configured": true, 00:11:06.166 "data_offset": 2048, 00:11:06.166 "data_size": 63488 00:11:06.166 }, 00:11:06.166 { 00:11:06.166 "name": "BaseBdev2", 00:11:06.166 "uuid": "1a83db85-dd2f-480a-8b64-5bb46007e272", 00:11:06.166 "is_configured": true, 00:11:06.166 "data_offset": 2048, 00:11:06.166 "data_size": 63488 00:11:06.166 }, 00:11:06.166 { 00:11:06.166 "name": "BaseBdev3", 00:11:06.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.166 "is_configured": false, 00:11:06.166 "data_offset": 0, 00:11:06.166 "data_size": 0 00:11:06.166 }, 00:11:06.166 { 00:11:06.166 "name": "BaseBdev4", 00:11:06.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.166 "is_configured": false, 00:11:06.166 "data_offset": 0, 00:11:06.166 "data_size": 0 00:11:06.166 } 00:11:06.166 ] 00:11:06.166 }' 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.166 13:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 [2024-11-26 13:23:55.106553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.734 BaseBdev3 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.734 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 [ 00:11:06.734 { 00:11:06.734 "name": "BaseBdev3", 00:11:06.734 "aliases": [ 00:11:06.734 "58640008-b531-43fd-9e2c-b74c28153db0" 00:11:06.734 ], 00:11:06.734 "product_name": "Malloc disk", 00:11:06.734 "block_size": 512, 00:11:06.734 "num_blocks": 65536, 00:11:06.734 "uuid": "58640008-b531-43fd-9e2c-b74c28153db0", 00:11:06.734 "assigned_rate_limits": { 00:11:06.734 "rw_ios_per_sec": 0, 00:11:06.734 "rw_mbytes_per_sec": 0, 00:11:06.734 "r_mbytes_per_sec": 0, 00:11:06.734 "w_mbytes_per_sec": 0 00:11:06.734 }, 00:11:06.734 "claimed": true, 00:11:06.734 "claim_type": "exclusive_write", 00:11:06.734 "zoned": false, 00:11:06.734 "supported_io_types": { 00:11:06.734 "read": true, 00:11:06.734 
"write": true, 00:11:06.734 "unmap": true, 00:11:06.734 "flush": true, 00:11:06.734 "reset": true, 00:11:06.734 "nvme_admin": false, 00:11:06.734 "nvme_io": false, 00:11:06.734 "nvme_io_md": false, 00:11:06.734 "write_zeroes": true, 00:11:06.734 "zcopy": true, 00:11:06.734 "get_zone_info": false, 00:11:06.734 "zone_management": false, 00:11:06.734 "zone_append": false, 00:11:06.734 "compare": false, 00:11:06.734 "compare_and_write": false, 00:11:06.734 "abort": true, 00:11:06.734 "seek_hole": false, 00:11:06.734 "seek_data": false, 00:11:06.734 "copy": true, 00:11:06.734 "nvme_iov_md": false 00:11:06.734 }, 00:11:06.734 "memory_domains": [ 00:11:06.734 { 00:11:06.734 "dma_device_id": "system", 00:11:06.734 "dma_device_type": 1 00:11:06.734 }, 00:11:06.734 { 00:11:06.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.735 "dma_device_type": 2 00:11:06.735 } 00:11:06.735 ], 00:11:06.735 "driver_specific": {} 00:11:06.735 } 00:11:06.735 ] 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.735 "name": "Existed_Raid", 00:11:06.735 "uuid": "3252260e-3b1f-4b5c-bede-c424049a2625", 00:11:06.735 "strip_size_kb": 0, 00:11:06.735 "state": "configuring", 00:11:06.735 "raid_level": "raid1", 00:11:06.735 "superblock": true, 00:11:06.735 "num_base_bdevs": 4, 00:11:06.735 "num_base_bdevs_discovered": 3, 00:11:06.735 "num_base_bdevs_operational": 4, 00:11:06.735 "base_bdevs_list": [ 00:11:06.735 { 00:11:06.735 "name": "BaseBdev1", 00:11:06.735 "uuid": "064f1e9e-6417-46e5-b766-e65451eab1f3", 00:11:06.735 "is_configured": true, 00:11:06.735 "data_offset": 2048, 00:11:06.735 "data_size": 63488 00:11:06.735 }, 00:11:06.735 { 00:11:06.735 "name": "BaseBdev2", 00:11:06.735 "uuid": 
"1a83db85-dd2f-480a-8b64-5bb46007e272", 00:11:06.735 "is_configured": true, 00:11:06.735 "data_offset": 2048, 00:11:06.735 "data_size": 63488 00:11:06.735 }, 00:11:06.735 { 00:11:06.735 "name": "BaseBdev3", 00:11:06.735 "uuid": "58640008-b531-43fd-9e2c-b74c28153db0", 00:11:06.735 "is_configured": true, 00:11:06.735 "data_offset": 2048, 00:11:06.735 "data_size": 63488 00:11:06.735 }, 00:11:06.735 { 00:11:06.735 "name": "BaseBdev4", 00:11:06.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.735 "is_configured": false, 00:11:06.735 "data_offset": 0, 00:11:06.735 "data_size": 0 00:11:06.735 } 00:11:06.735 ] 00:11:06.735 }' 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.735 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.303 [2024-11-26 13:23:55.694778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.303 [2024-11-26 13:23:55.695074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.303 [2024-11-26 13:23:55.695108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.303 [2024-11-26 13:23:55.695455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:07.303 [2024-11-26 13:23:55.695678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.303 [2024-11-26 13:23:55.695698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:07.303 BaseBdev4 00:11:07.303 [2024-11-26 13:23:55.695888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.303 [ 00:11:07.303 { 00:11:07.303 "name": "BaseBdev4", 00:11:07.303 "aliases": [ 00:11:07.303 "a3876f13-e1d7-4933-81e8-496ba93d1925" 00:11:07.303 ], 00:11:07.303 "product_name": "Malloc disk", 00:11:07.303 "block_size": 512, 00:11:07.303 
"num_blocks": 65536, 00:11:07.303 "uuid": "a3876f13-e1d7-4933-81e8-496ba93d1925", 00:11:07.303 "assigned_rate_limits": { 00:11:07.303 "rw_ios_per_sec": 0, 00:11:07.303 "rw_mbytes_per_sec": 0, 00:11:07.303 "r_mbytes_per_sec": 0, 00:11:07.303 "w_mbytes_per_sec": 0 00:11:07.303 }, 00:11:07.303 "claimed": true, 00:11:07.303 "claim_type": "exclusive_write", 00:11:07.303 "zoned": false, 00:11:07.303 "supported_io_types": { 00:11:07.303 "read": true, 00:11:07.303 "write": true, 00:11:07.303 "unmap": true, 00:11:07.303 "flush": true, 00:11:07.303 "reset": true, 00:11:07.303 "nvme_admin": false, 00:11:07.303 "nvme_io": false, 00:11:07.303 "nvme_io_md": false, 00:11:07.303 "write_zeroes": true, 00:11:07.303 "zcopy": true, 00:11:07.303 "get_zone_info": false, 00:11:07.303 "zone_management": false, 00:11:07.303 "zone_append": false, 00:11:07.303 "compare": false, 00:11:07.303 "compare_and_write": false, 00:11:07.303 "abort": true, 00:11:07.303 "seek_hole": false, 00:11:07.303 "seek_data": false, 00:11:07.303 "copy": true, 00:11:07.303 "nvme_iov_md": false 00:11:07.303 }, 00:11:07.303 "memory_domains": [ 00:11:07.303 { 00:11:07.303 "dma_device_id": "system", 00:11:07.303 "dma_device_type": 1 00:11:07.303 }, 00:11:07.303 { 00:11:07.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.303 "dma_device_type": 2 00:11:07.303 } 00:11:07.303 ], 00:11:07.303 "driver_specific": {} 00:11:07.303 } 00:11:07.303 ] 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.303 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.303 "name": "Existed_Raid", 00:11:07.303 "uuid": "3252260e-3b1f-4b5c-bede-c424049a2625", 00:11:07.303 "strip_size_kb": 0, 00:11:07.303 "state": "online", 00:11:07.303 "raid_level": "raid1", 00:11:07.303 "superblock": true, 00:11:07.303 "num_base_bdevs": 4, 
00:11:07.303 "num_base_bdevs_discovered": 4, 00:11:07.303 "num_base_bdevs_operational": 4, 00:11:07.303 "base_bdevs_list": [ 00:11:07.303 { 00:11:07.303 "name": "BaseBdev1", 00:11:07.303 "uuid": "064f1e9e-6417-46e5-b766-e65451eab1f3", 00:11:07.303 "is_configured": true, 00:11:07.303 "data_offset": 2048, 00:11:07.303 "data_size": 63488 00:11:07.303 }, 00:11:07.303 { 00:11:07.303 "name": "BaseBdev2", 00:11:07.303 "uuid": "1a83db85-dd2f-480a-8b64-5bb46007e272", 00:11:07.303 "is_configured": true, 00:11:07.303 "data_offset": 2048, 00:11:07.303 "data_size": 63488 00:11:07.303 }, 00:11:07.303 { 00:11:07.303 "name": "BaseBdev3", 00:11:07.303 "uuid": "58640008-b531-43fd-9e2c-b74c28153db0", 00:11:07.303 "is_configured": true, 00:11:07.303 "data_offset": 2048, 00:11:07.303 "data_size": 63488 00:11:07.303 }, 00:11:07.303 { 00:11:07.303 "name": "BaseBdev4", 00:11:07.303 "uuid": "a3876f13-e1d7-4933-81e8-496ba93d1925", 00:11:07.303 "is_configured": true, 00:11:07.303 "data_offset": 2048, 00:11:07.303 "data_size": 63488 00:11:07.303 } 00:11:07.303 ] 00:11:07.304 }' 00:11:07.304 13:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.304 13:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.873 
13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.873 [2024-11-26 13:23:56.263288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.873 "name": "Existed_Raid", 00:11:07.873 "aliases": [ 00:11:07.873 "3252260e-3b1f-4b5c-bede-c424049a2625" 00:11:07.873 ], 00:11:07.873 "product_name": "Raid Volume", 00:11:07.873 "block_size": 512, 00:11:07.873 "num_blocks": 63488, 00:11:07.873 "uuid": "3252260e-3b1f-4b5c-bede-c424049a2625", 00:11:07.873 "assigned_rate_limits": { 00:11:07.873 "rw_ios_per_sec": 0, 00:11:07.873 "rw_mbytes_per_sec": 0, 00:11:07.873 "r_mbytes_per_sec": 0, 00:11:07.873 "w_mbytes_per_sec": 0 00:11:07.873 }, 00:11:07.873 "claimed": false, 00:11:07.873 "zoned": false, 00:11:07.873 "supported_io_types": { 00:11:07.873 "read": true, 00:11:07.873 "write": true, 00:11:07.873 "unmap": false, 00:11:07.873 "flush": false, 00:11:07.873 "reset": true, 00:11:07.873 "nvme_admin": false, 00:11:07.873 "nvme_io": false, 00:11:07.873 "nvme_io_md": false, 00:11:07.873 "write_zeroes": true, 00:11:07.873 "zcopy": false, 00:11:07.873 "get_zone_info": false, 00:11:07.873 "zone_management": false, 00:11:07.873 "zone_append": false, 00:11:07.873 "compare": false, 00:11:07.873 "compare_and_write": false, 00:11:07.873 "abort": false, 00:11:07.873 "seek_hole": false, 00:11:07.873 "seek_data": false, 00:11:07.873 "copy": false, 00:11:07.873 
"nvme_iov_md": false 00:11:07.873 }, 00:11:07.873 "memory_domains": [ 00:11:07.873 { 00:11:07.873 "dma_device_id": "system", 00:11:07.873 "dma_device_type": 1 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.873 "dma_device_type": 2 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "dma_device_id": "system", 00:11:07.873 "dma_device_type": 1 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.873 "dma_device_type": 2 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "dma_device_id": "system", 00:11:07.873 "dma_device_type": 1 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.873 "dma_device_type": 2 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "dma_device_id": "system", 00:11:07.873 "dma_device_type": 1 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.873 "dma_device_type": 2 00:11:07.873 } 00:11:07.873 ], 00:11:07.873 "driver_specific": { 00:11:07.873 "raid": { 00:11:07.873 "uuid": "3252260e-3b1f-4b5c-bede-c424049a2625", 00:11:07.873 "strip_size_kb": 0, 00:11:07.873 "state": "online", 00:11:07.873 "raid_level": "raid1", 00:11:07.873 "superblock": true, 00:11:07.873 "num_base_bdevs": 4, 00:11:07.873 "num_base_bdevs_discovered": 4, 00:11:07.873 "num_base_bdevs_operational": 4, 00:11:07.873 "base_bdevs_list": [ 00:11:07.873 { 00:11:07.873 "name": "BaseBdev1", 00:11:07.873 "uuid": "064f1e9e-6417-46e5-b766-e65451eab1f3", 00:11:07.873 "is_configured": true, 00:11:07.873 "data_offset": 2048, 00:11:07.873 "data_size": 63488 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "name": "BaseBdev2", 00:11:07.873 "uuid": "1a83db85-dd2f-480a-8b64-5bb46007e272", 00:11:07.873 "is_configured": true, 00:11:07.873 "data_offset": 2048, 00:11:07.873 "data_size": 63488 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "name": "BaseBdev3", 00:11:07.873 "uuid": "58640008-b531-43fd-9e2c-b74c28153db0", 00:11:07.873 "is_configured": true, 
00:11:07.873 "data_offset": 2048, 00:11:07.873 "data_size": 63488 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "name": "BaseBdev4", 00:11:07.873 "uuid": "a3876f13-e1d7-4933-81e8-496ba93d1925", 00:11:07.873 "is_configured": true, 00:11:07.873 "data_offset": 2048, 00:11:07.873 "data_size": 63488 00:11:07.873 } 00:11:07.873 ] 00:11:07.873 } 00:11:07.873 } 00:11:07.873 }' 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.873 BaseBdev2 00:11:07.873 BaseBdev3 00:11:07.873 BaseBdev4' 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.873 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.133 13:23:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.133 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.133 [2024-11-26 13:23:56.631114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:08.393 13:23:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.393 "name": "Existed_Raid", 00:11:08.393 "uuid": "3252260e-3b1f-4b5c-bede-c424049a2625", 00:11:08.393 "strip_size_kb": 0, 00:11:08.393 
"state": "online", 00:11:08.393 "raid_level": "raid1", 00:11:08.393 "superblock": true, 00:11:08.393 "num_base_bdevs": 4, 00:11:08.393 "num_base_bdevs_discovered": 3, 00:11:08.393 "num_base_bdevs_operational": 3, 00:11:08.393 "base_bdevs_list": [ 00:11:08.393 { 00:11:08.393 "name": null, 00:11:08.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.393 "is_configured": false, 00:11:08.393 "data_offset": 0, 00:11:08.393 "data_size": 63488 00:11:08.393 }, 00:11:08.393 { 00:11:08.393 "name": "BaseBdev2", 00:11:08.393 "uuid": "1a83db85-dd2f-480a-8b64-5bb46007e272", 00:11:08.393 "is_configured": true, 00:11:08.393 "data_offset": 2048, 00:11:08.393 "data_size": 63488 00:11:08.393 }, 00:11:08.393 { 00:11:08.393 "name": "BaseBdev3", 00:11:08.393 "uuid": "58640008-b531-43fd-9e2c-b74c28153db0", 00:11:08.393 "is_configured": true, 00:11:08.393 "data_offset": 2048, 00:11:08.393 "data_size": 63488 00:11:08.393 }, 00:11:08.393 { 00:11:08.393 "name": "BaseBdev4", 00:11:08.393 "uuid": "a3876f13-e1d7-4933-81e8-496ba93d1925", 00:11:08.393 "is_configured": true, 00:11:08.393 "data_offset": 2048, 00:11:08.393 "data_size": 63488 00:11:08.393 } 00:11:08.393 ] 00:11:08.393 }' 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.393 13:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.962 13:23:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.962 [2024-11-26 13:23:57.278931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.962 [2024-11-26 13:23:57.407424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.962 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.222 [2024-11-26 13:23:57.534537] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:09.222 [2024-11-26 13:23:57.534663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.222 [2024-11-26 13:23:57.602612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.222 [2024-11-26 13:23:57.602668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.222 [2024-11-26 13:23:57.602687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.222 BaseBdev2 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.222 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:09.223 [ 00:11:09.223 { 00:11:09.223 "name": "BaseBdev2", 00:11:09.223 "aliases": [ 00:11:09.223 "32df77e1-5a95-4ebd-a9de-844109ac223e" 00:11:09.223 ], 00:11:09.223 "product_name": "Malloc disk", 00:11:09.223 "block_size": 512, 00:11:09.223 "num_blocks": 65536, 00:11:09.223 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:09.223 "assigned_rate_limits": { 00:11:09.223 "rw_ios_per_sec": 0, 00:11:09.223 "rw_mbytes_per_sec": 0, 00:11:09.223 "r_mbytes_per_sec": 0, 00:11:09.223 "w_mbytes_per_sec": 0 00:11:09.223 }, 00:11:09.223 "claimed": false, 00:11:09.223 "zoned": false, 00:11:09.223 "supported_io_types": { 00:11:09.223 "read": true, 00:11:09.223 "write": true, 00:11:09.223 "unmap": true, 00:11:09.223 "flush": true, 00:11:09.223 "reset": true, 00:11:09.223 "nvme_admin": false, 00:11:09.223 "nvme_io": false, 00:11:09.223 "nvme_io_md": false, 00:11:09.223 "write_zeroes": true, 00:11:09.223 "zcopy": true, 00:11:09.223 "get_zone_info": false, 00:11:09.223 "zone_management": false, 00:11:09.223 "zone_append": false, 00:11:09.223 "compare": false, 00:11:09.223 "compare_and_write": false, 00:11:09.223 "abort": true, 00:11:09.223 "seek_hole": false, 00:11:09.223 "seek_data": false, 00:11:09.223 "copy": true, 00:11:09.223 "nvme_iov_md": false 00:11:09.223 }, 00:11:09.223 "memory_domains": [ 00:11:09.223 { 00:11:09.223 "dma_device_id": "system", 00:11:09.223 "dma_device_type": 1 00:11:09.223 }, 00:11:09.223 { 00:11:09.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.223 "dma_device_type": 2 00:11:09.223 } 00:11:09.223 ], 00:11:09.223 "driver_specific": {} 00:11:09.223 } 00:11:09.223 ] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.223 13:23:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.223 BaseBdev3 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.223 13:23:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.223 [ 00:11:09.223 { 00:11:09.223 "name": "BaseBdev3", 00:11:09.223 "aliases": [ 00:11:09.223 "0a69a457-1b21-4cba-81c0-c3e8eb1bab83" 00:11:09.223 ], 00:11:09.223 "product_name": "Malloc disk", 00:11:09.223 "block_size": 512, 00:11:09.223 "num_blocks": 65536, 00:11:09.223 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:09.223 "assigned_rate_limits": { 00:11:09.223 "rw_ios_per_sec": 0, 00:11:09.223 "rw_mbytes_per_sec": 0, 00:11:09.223 "r_mbytes_per_sec": 0, 00:11:09.223 "w_mbytes_per_sec": 0 00:11:09.223 }, 00:11:09.223 "claimed": false, 00:11:09.223 "zoned": false, 00:11:09.223 "supported_io_types": { 00:11:09.223 "read": true, 00:11:09.223 "write": true, 00:11:09.223 "unmap": true, 00:11:09.223 "flush": true, 00:11:09.223 "reset": true, 00:11:09.223 "nvme_admin": false, 00:11:09.223 "nvme_io": false, 00:11:09.223 "nvme_io_md": false, 00:11:09.223 "write_zeroes": true, 00:11:09.223 "zcopy": true, 00:11:09.223 "get_zone_info": false, 00:11:09.223 "zone_management": false, 00:11:09.223 "zone_append": false, 00:11:09.223 "compare": false, 00:11:09.223 "compare_and_write": false, 00:11:09.223 "abort": true, 00:11:09.223 "seek_hole": false, 00:11:09.223 "seek_data": false, 00:11:09.223 "copy": true, 00:11:09.223 "nvme_iov_md": false 00:11:09.223 }, 00:11:09.223 "memory_domains": [ 00:11:09.223 { 00:11:09.223 "dma_device_id": "system", 00:11:09.223 "dma_device_type": 1 00:11:09.223 }, 00:11:09.223 { 00:11:09.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.223 "dma_device_type": 2 00:11:09.223 } 00:11:09.223 ], 00:11:09.223 "driver_specific": {} 00:11:09.223 } 00:11:09.223 ] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.223 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.483 BaseBdev4 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.483 [ 00:11:09.483 { 00:11:09.483 "name": "BaseBdev4", 00:11:09.483 "aliases": [ 00:11:09.483 "68dfc25e-dc5f-4c27-883f-1a99407fe44f" 00:11:09.483 ], 00:11:09.483 "product_name": "Malloc disk", 00:11:09.483 "block_size": 512, 00:11:09.483 "num_blocks": 65536, 00:11:09.483 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:09.483 "assigned_rate_limits": { 00:11:09.483 "rw_ios_per_sec": 0, 00:11:09.483 "rw_mbytes_per_sec": 0, 00:11:09.483 "r_mbytes_per_sec": 0, 00:11:09.483 "w_mbytes_per_sec": 0 00:11:09.483 }, 00:11:09.483 "claimed": false, 00:11:09.483 "zoned": false, 00:11:09.483 "supported_io_types": { 00:11:09.483 "read": true, 00:11:09.483 "write": true, 00:11:09.483 "unmap": true, 00:11:09.483 "flush": true, 00:11:09.483 "reset": true, 00:11:09.483 "nvme_admin": false, 00:11:09.483 "nvme_io": false, 00:11:09.483 "nvme_io_md": false, 00:11:09.483 "write_zeroes": true, 00:11:09.483 "zcopy": true, 00:11:09.483 "get_zone_info": false, 00:11:09.483 "zone_management": false, 00:11:09.483 "zone_append": false, 00:11:09.483 "compare": false, 00:11:09.483 "compare_and_write": false, 00:11:09.483 "abort": true, 00:11:09.483 "seek_hole": false, 00:11:09.483 "seek_data": false, 00:11:09.483 "copy": true, 00:11:09.483 "nvme_iov_md": false 00:11:09.483 }, 00:11:09.483 "memory_domains": [ 00:11:09.483 { 00:11:09.483 "dma_device_id": "system", 00:11:09.483 "dma_device_type": 1 00:11:09.483 }, 00:11:09.483 { 00:11:09.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.483 "dma_device_type": 2 00:11:09.483 } 00:11:09.483 ], 00:11:09.483 "driver_specific": {} 00:11:09.483 } 00:11:09.483 ] 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.483 [2024-11-26 13:23:57.871184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.483 [2024-11-26 13:23:57.871256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.483 [2024-11-26 13:23:57.871283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.483 [2024-11-26 13:23:57.873463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.483 [2024-11-26 13:23:57.873522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.483 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.484 "name": "Existed_Raid", 00:11:09.484 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:09.484 "strip_size_kb": 0, 00:11:09.484 "state": "configuring", 00:11:09.484 "raid_level": "raid1", 00:11:09.484 "superblock": true, 00:11:09.484 "num_base_bdevs": 4, 00:11:09.484 "num_base_bdevs_discovered": 3, 00:11:09.484 "num_base_bdevs_operational": 4, 00:11:09.484 "base_bdevs_list": [ 00:11:09.484 { 00:11:09.484 "name": "BaseBdev1", 00:11:09.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.484 "is_configured": false, 00:11:09.484 "data_offset": 0, 00:11:09.484 "data_size": 0 00:11:09.484 }, 00:11:09.484 { 00:11:09.484 "name": "BaseBdev2", 00:11:09.484 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 
00:11:09.484 "is_configured": true, 00:11:09.484 "data_offset": 2048, 00:11:09.484 "data_size": 63488 00:11:09.484 }, 00:11:09.484 { 00:11:09.484 "name": "BaseBdev3", 00:11:09.484 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:09.484 "is_configured": true, 00:11:09.484 "data_offset": 2048, 00:11:09.484 "data_size": 63488 00:11:09.484 }, 00:11:09.484 { 00:11:09.484 "name": "BaseBdev4", 00:11:09.484 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:09.484 "is_configured": true, 00:11:09.484 "data_offset": 2048, 00:11:09.484 "data_size": 63488 00:11:09.484 } 00:11:09.484 ] 00:11:09.484 }' 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.484 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.053 [2024-11-26 13:23:58.403291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.053 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.054 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.054 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.054 "name": "Existed_Raid", 00:11:10.054 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:10.054 "strip_size_kb": 0, 00:11:10.054 "state": "configuring", 00:11:10.054 "raid_level": "raid1", 00:11:10.054 "superblock": true, 00:11:10.054 "num_base_bdevs": 4, 00:11:10.054 "num_base_bdevs_discovered": 2, 00:11:10.054 "num_base_bdevs_operational": 4, 00:11:10.054 "base_bdevs_list": [ 00:11:10.054 { 00:11:10.054 "name": "BaseBdev1", 00:11:10.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.054 "is_configured": false, 00:11:10.054 "data_offset": 0, 00:11:10.054 "data_size": 0 00:11:10.054 }, 00:11:10.054 { 00:11:10.054 "name": null, 00:11:10.054 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:10.054 
"is_configured": false, 00:11:10.054 "data_offset": 0, 00:11:10.054 "data_size": 63488 00:11:10.054 }, 00:11:10.054 { 00:11:10.054 "name": "BaseBdev3", 00:11:10.054 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:10.054 "is_configured": true, 00:11:10.054 "data_offset": 2048, 00:11:10.054 "data_size": 63488 00:11:10.054 }, 00:11:10.054 { 00:11:10.054 "name": "BaseBdev4", 00:11:10.054 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:10.054 "is_configured": true, 00:11:10.054 "data_offset": 2048, 00:11:10.054 "data_size": 63488 00:11:10.054 } 00:11:10.054 ] 00:11:10.054 }' 00:11:10.054 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.054 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.623 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 [2024-11-26 13:23:59.018774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.623 BaseBdev1 
00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 [ 00:11:10.623 { 00:11:10.623 "name": "BaseBdev1", 00:11:10.623 "aliases": [ 00:11:10.623 "aaf95b66-af3c-4afa-bc10-8857da9bca40" 00:11:10.623 ], 00:11:10.623 "product_name": "Malloc disk", 00:11:10.623 "block_size": 512, 00:11:10.623 "num_blocks": 65536, 00:11:10.623 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:10.623 "assigned_rate_limits": { 00:11:10.623 
"rw_ios_per_sec": 0, 00:11:10.623 "rw_mbytes_per_sec": 0, 00:11:10.623 "r_mbytes_per_sec": 0, 00:11:10.623 "w_mbytes_per_sec": 0 00:11:10.623 }, 00:11:10.623 "claimed": true, 00:11:10.623 "claim_type": "exclusive_write", 00:11:10.623 "zoned": false, 00:11:10.623 "supported_io_types": { 00:11:10.623 "read": true, 00:11:10.623 "write": true, 00:11:10.623 "unmap": true, 00:11:10.623 "flush": true, 00:11:10.623 "reset": true, 00:11:10.623 "nvme_admin": false, 00:11:10.623 "nvme_io": false, 00:11:10.623 "nvme_io_md": false, 00:11:10.623 "write_zeroes": true, 00:11:10.623 "zcopy": true, 00:11:10.623 "get_zone_info": false, 00:11:10.623 "zone_management": false, 00:11:10.623 "zone_append": false, 00:11:10.623 "compare": false, 00:11:10.623 "compare_and_write": false, 00:11:10.623 "abort": true, 00:11:10.623 "seek_hole": false, 00:11:10.623 "seek_data": false, 00:11:10.623 "copy": true, 00:11:10.623 "nvme_iov_md": false 00:11:10.623 }, 00:11:10.623 "memory_domains": [ 00:11:10.623 { 00:11:10.623 "dma_device_id": "system", 00:11:10.623 "dma_device_type": 1 00:11:10.623 }, 00:11:10.623 { 00:11:10.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.623 "dma_device_type": 2 00:11:10.623 } 00:11:10.623 ], 00:11:10.623 "driver_specific": {} 00:11:10.623 } 00:11:10.623 ] 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.623 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.623 "name": "Existed_Raid", 00:11:10.623 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:10.623 "strip_size_kb": 0, 00:11:10.623 "state": "configuring", 00:11:10.623 "raid_level": "raid1", 00:11:10.623 "superblock": true, 00:11:10.623 "num_base_bdevs": 4, 00:11:10.623 "num_base_bdevs_discovered": 3, 00:11:10.623 "num_base_bdevs_operational": 4, 00:11:10.623 "base_bdevs_list": [ 00:11:10.623 { 00:11:10.623 "name": "BaseBdev1", 00:11:10.623 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:10.623 "is_configured": true, 00:11:10.623 "data_offset": 2048, 00:11:10.623 "data_size": 63488 
00:11:10.623 }, 00:11:10.623 { 00:11:10.623 "name": null, 00:11:10.624 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:10.624 "is_configured": false, 00:11:10.624 "data_offset": 0, 00:11:10.624 "data_size": 63488 00:11:10.624 }, 00:11:10.624 { 00:11:10.624 "name": "BaseBdev3", 00:11:10.624 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:10.624 "is_configured": true, 00:11:10.624 "data_offset": 2048, 00:11:10.624 "data_size": 63488 00:11:10.624 }, 00:11:10.624 { 00:11:10.624 "name": "BaseBdev4", 00:11:10.624 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:10.624 "is_configured": true, 00:11:10.624 "data_offset": 2048, 00:11:10.624 "data_size": 63488 00:11:10.624 } 00:11:10.624 ] 00:11:10.624 }' 00:11:10.624 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.624 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.191 
[2024-11-26 13:23:59.622951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.191 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.191 13:23:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.191 "name": "Existed_Raid", 00:11:11.191 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:11.191 "strip_size_kb": 0, 00:11:11.191 "state": "configuring", 00:11:11.191 "raid_level": "raid1", 00:11:11.191 "superblock": true, 00:11:11.191 "num_base_bdevs": 4, 00:11:11.191 "num_base_bdevs_discovered": 2, 00:11:11.191 "num_base_bdevs_operational": 4, 00:11:11.191 "base_bdevs_list": [ 00:11:11.191 { 00:11:11.191 "name": "BaseBdev1", 00:11:11.191 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:11.191 "is_configured": true, 00:11:11.191 "data_offset": 2048, 00:11:11.191 "data_size": 63488 00:11:11.191 }, 00:11:11.191 { 00:11:11.192 "name": null, 00:11:11.192 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:11.192 "is_configured": false, 00:11:11.192 "data_offset": 0, 00:11:11.192 "data_size": 63488 00:11:11.192 }, 00:11:11.192 { 00:11:11.192 "name": null, 00:11:11.192 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:11.192 "is_configured": false, 00:11:11.192 "data_offset": 0, 00:11:11.192 "data_size": 63488 00:11:11.192 }, 00:11:11.192 { 00:11:11.192 "name": "BaseBdev4", 00:11:11.192 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:11.192 "is_configured": true, 00:11:11.192 "data_offset": 2048, 00:11:11.192 "data_size": 63488 00:11:11.192 } 00:11:11.192 ] 00:11:11.192 }' 00:11:11.192 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.192 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.760 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.760 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.760 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.760 
13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.760 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.761 [2024-11-26 13:24:00.191082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.761 "name": "Existed_Raid", 00:11:11.761 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:11.761 "strip_size_kb": 0, 00:11:11.761 "state": "configuring", 00:11:11.761 "raid_level": "raid1", 00:11:11.761 "superblock": true, 00:11:11.761 "num_base_bdevs": 4, 00:11:11.761 "num_base_bdevs_discovered": 3, 00:11:11.761 "num_base_bdevs_operational": 4, 00:11:11.761 "base_bdevs_list": [ 00:11:11.761 { 00:11:11.761 "name": "BaseBdev1", 00:11:11.761 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:11.761 "is_configured": true, 00:11:11.761 "data_offset": 2048, 00:11:11.761 "data_size": 63488 00:11:11.761 }, 00:11:11.761 { 00:11:11.761 "name": null, 00:11:11.761 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:11.761 "is_configured": false, 00:11:11.761 "data_offset": 0, 00:11:11.761 "data_size": 63488 00:11:11.761 }, 00:11:11.761 { 00:11:11.761 "name": "BaseBdev3", 00:11:11.761 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:11.761 "is_configured": true, 00:11:11.761 "data_offset": 2048, 00:11:11.761 "data_size": 63488 00:11:11.761 }, 00:11:11.761 { 00:11:11.761 "name": "BaseBdev4", 00:11:11.761 "uuid": 
"68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:11.761 "is_configured": true, 00:11:11.761 "data_offset": 2048, 00:11:11.761 "data_size": 63488 00:11:11.761 } 00:11:11.761 ] 00:11:11.761 }' 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.761 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.329 [2024-11-26 13:24:00.763234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.329 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.330 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.330 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.330 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.330 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.330 "name": "Existed_Raid", 00:11:12.330 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:12.330 "strip_size_kb": 0, 00:11:12.330 "state": "configuring", 00:11:12.330 "raid_level": "raid1", 00:11:12.330 "superblock": true, 00:11:12.330 "num_base_bdevs": 4, 00:11:12.330 "num_base_bdevs_discovered": 2, 00:11:12.330 "num_base_bdevs_operational": 4, 00:11:12.330 "base_bdevs_list": [ 00:11:12.330 { 00:11:12.330 "name": null, 00:11:12.330 
"uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:12.330 "is_configured": false, 00:11:12.330 "data_offset": 0, 00:11:12.330 "data_size": 63488 00:11:12.330 }, 00:11:12.330 { 00:11:12.330 "name": null, 00:11:12.330 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:12.330 "is_configured": false, 00:11:12.330 "data_offset": 0, 00:11:12.330 "data_size": 63488 00:11:12.330 }, 00:11:12.330 { 00:11:12.330 "name": "BaseBdev3", 00:11:12.330 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:12.330 "is_configured": true, 00:11:12.330 "data_offset": 2048, 00:11:12.330 "data_size": 63488 00:11:12.330 }, 00:11:12.330 { 00:11:12.330 "name": "BaseBdev4", 00:11:12.330 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:12.330 "is_configured": true, 00:11:12.330 "data_offset": 2048, 00:11:12.330 "data_size": 63488 00:11:12.330 } 00:11:12.330 ] 00:11:12.330 }' 00:11:12.330 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.330 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.901 [2024-11-26 13:24:01.401538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.901 13:24:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.901 "name": "Existed_Raid", 00:11:12.901 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:12.901 "strip_size_kb": 0, 00:11:12.901 "state": "configuring", 00:11:12.901 "raid_level": "raid1", 00:11:12.901 "superblock": true, 00:11:12.901 "num_base_bdevs": 4, 00:11:12.901 "num_base_bdevs_discovered": 3, 00:11:12.901 "num_base_bdevs_operational": 4, 00:11:12.901 "base_bdevs_list": [ 00:11:12.901 { 00:11:12.901 "name": null, 00:11:12.901 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:12.901 "is_configured": false, 00:11:12.901 "data_offset": 0, 00:11:12.901 "data_size": 63488 00:11:12.901 }, 00:11:12.901 { 00:11:12.901 "name": "BaseBdev2", 00:11:12.901 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:12.901 "is_configured": true, 00:11:12.901 "data_offset": 2048, 00:11:12.901 "data_size": 63488 00:11:12.901 }, 00:11:12.901 { 00:11:12.901 "name": "BaseBdev3", 00:11:12.901 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:12.901 "is_configured": true, 00:11:12.901 "data_offset": 2048, 00:11:12.901 "data_size": 63488 00:11:12.901 }, 00:11:12.901 { 00:11:12.901 "name": "BaseBdev4", 00:11:12.901 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:12.901 "is_configured": true, 00:11:12.901 "data_offset": 2048, 00:11:12.901 "data_size": 63488 00:11:12.901 } 00:11:12.901 ] 00:11:12.901 }' 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.901 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.467 13:24:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:13.467 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.467 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aaf95b66-af3c-4afa-bc10-8857da9bca40 00:11:13.467 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.467 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.726 [2024-11-26 13:24:02.051310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:13.726 [2024-11-26 13:24:02.051554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:13.726 [2024-11-26 13:24:02.051576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.726 NewBaseBdev 00:11:13.726 [2024-11-26 13:24:02.051874] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:13.726 [2024-11-26 13:24:02.052062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:13.726 [2024-11-26 13:24:02.052078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:13.726 [2024-11-26 13:24:02.052225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.726 13:24:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.726 [ 00:11:13.726 { 00:11:13.726 "name": "NewBaseBdev", 00:11:13.726 "aliases": [ 00:11:13.726 "aaf95b66-af3c-4afa-bc10-8857da9bca40" 00:11:13.726 ], 00:11:13.726 "product_name": "Malloc disk", 00:11:13.726 "block_size": 512, 00:11:13.726 "num_blocks": 65536, 00:11:13.726 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:13.726 "assigned_rate_limits": { 00:11:13.726 "rw_ios_per_sec": 0, 00:11:13.726 "rw_mbytes_per_sec": 0, 00:11:13.726 "r_mbytes_per_sec": 0, 00:11:13.726 "w_mbytes_per_sec": 0 00:11:13.726 }, 00:11:13.726 "claimed": true, 00:11:13.726 "claim_type": "exclusive_write", 00:11:13.726 "zoned": false, 00:11:13.726 "supported_io_types": { 00:11:13.726 "read": true, 00:11:13.726 "write": true, 00:11:13.726 "unmap": true, 00:11:13.726 "flush": true, 00:11:13.726 "reset": true, 00:11:13.726 "nvme_admin": false, 00:11:13.726 "nvme_io": false, 00:11:13.726 "nvme_io_md": false, 00:11:13.726 "write_zeroes": true, 00:11:13.726 "zcopy": true, 00:11:13.726 "get_zone_info": false, 00:11:13.726 "zone_management": false, 00:11:13.726 "zone_append": false, 00:11:13.726 "compare": false, 00:11:13.726 "compare_and_write": false, 00:11:13.726 "abort": true, 00:11:13.726 "seek_hole": false, 00:11:13.726 "seek_data": false, 00:11:13.726 "copy": true, 00:11:13.726 "nvme_iov_md": false 00:11:13.726 }, 00:11:13.726 "memory_domains": [ 00:11:13.726 { 00:11:13.726 "dma_device_id": "system", 00:11:13.726 "dma_device_type": 1 00:11:13.726 }, 00:11:13.726 { 00:11:13.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.726 "dma_device_type": 2 00:11:13.726 } 00:11:13.726 ], 00:11:13.726 "driver_specific": {} 00:11:13.726 } 00:11:13.726 ] 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.726 13:24:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.726 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.726 "name": "Existed_Raid", 00:11:13.726 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:13.726 "strip_size_kb": 0, 00:11:13.726 
"state": "online", 00:11:13.726 "raid_level": "raid1", 00:11:13.726 "superblock": true, 00:11:13.726 "num_base_bdevs": 4, 00:11:13.726 "num_base_bdevs_discovered": 4, 00:11:13.726 "num_base_bdevs_operational": 4, 00:11:13.726 "base_bdevs_list": [ 00:11:13.727 { 00:11:13.727 "name": "NewBaseBdev", 00:11:13.727 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:13.727 "is_configured": true, 00:11:13.727 "data_offset": 2048, 00:11:13.727 "data_size": 63488 00:11:13.727 }, 00:11:13.727 { 00:11:13.727 "name": "BaseBdev2", 00:11:13.727 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:13.727 "is_configured": true, 00:11:13.727 "data_offset": 2048, 00:11:13.727 "data_size": 63488 00:11:13.727 }, 00:11:13.727 { 00:11:13.727 "name": "BaseBdev3", 00:11:13.727 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:13.727 "is_configured": true, 00:11:13.727 "data_offset": 2048, 00:11:13.727 "data_size": 63488 00:11:13.727 }, 00:11:13.727 { 00:11:13.727 "name": "BaseBdev4", 00:11:13.727 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:13.727 "is_configured": true, 00:11:13.727 "data_offset": 2048, 00:11:13.727 "data_size": 63488 00:11:13.727 } 00:11:13.727 ] 00:11:13.727 }' 00:11:13.727 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.727 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.296 
13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 [2024-11-26 13:24:02.623791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.296 "name": "Existed_Raid", 00:11:14.296 "aliases": [ 00:11:14.296 "b76e5e68-e103-4651-a36f-34354a656841" 00:11:14.296 ], 00:11:14.296 "product_name": "Raid Volume", 00:11:14.296 "block_size": 512, 00:11:14.296 "num_blocks": 63488, 00:11:14.296 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:14.296 "assigned_rate_limits": { 00:11:14.296 "rw_ios_per_sec": 0, 00:11:14.296 "rw_mbytes_per_sec": 0, 00:11:14.296 "r_mbytes_per_sec": 0, 00:11:14.296 "w_mbytes_per_sec": 0 00:11:14.296 }, 00:11:14.296 "claimed": false, 00:11:14.296 "zoned": false, 00:11:14.296 "supported_io_types": { 00:11:14.296 "read": true, 00:11:14.296 "write": true, 00:11:14.296 "unmap": false, 00:11:14.296 "flush": false, 00:11:14.296 "reset": true, 00:11:14.296 "nvme_admin": false, 00:11:14.296 "nvme_io": false, 00:11:14.296 "nvme_io_md": false, 00:11:14.296 "write_zeroes": true, 00:11:14.296 "zcopy": false, 00:11:14.296 "get_zone_info": false, 00:11:14.296 "zone_management": false, 00:11:14.296 "zone_append": false, 00:11:14.296 "compare": false, 00:11:14.296 "compare_and_write": false, 00:11:14.296 
"abort": false, 00:11:14.296 "seek_hole": false, 00:11:14.296 "seek_data": false, 00:11:14.296 "copy": false, 00:11:14.296 "nvme_iov_md": false 00:11:14.296 }, 00:11:14.296 "memory_domains": [ 00:11:14.296 { 00:11:14.296 "dma_device_id": "system", 00:11:14.296 "dma_device_type": 1 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.296 "dma_device_type": 2 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "system", 00:11:14.296 "dma_device_type": 1 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.296 "dma_device_type": 2 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "system", 00:11:14.296 "dma_device_type": 1 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.296 "dma_device_type": 2 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "system", 00:11:14.296 "dma_device_type": 1 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.296 "dma_device_type": 2 00:11:14.296 } 00:11:14.296 ], 00:11:14.296 "driver_specific": { 00:11:14.296 "raid": { 00:11:14.296 "uuid": "b76e5e68-e103-4651-a36f-34354a656841", 00:11:14.296 "strip_size_kb": 0, 00:11:14.296 "state": "online", 00:11:14.296 "raid_level": "raid1", 00:11:14.296 "superblock": true, 00:11:14.296 "num_base_bdevs": 4, 00:11:14.296 "num_base_bdevs_discovered": 4, 00:11:14.296 "num_base_bdevs_operational": 4, 00:11:14.296 "base_bdevs_list": [ 00:11:14.296 { 00:11:14.296 "name": "NewBaseBdev", 00:11:14.296 "uuid": "aaf95b66-af3c-4afa-bc10-8857da9bca40", 00:11:14.296 "is_configured": true, 00:11:14.296 "data_offset": 2048, 00:11:14.296 "data_size": 63488 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "name": "BaseBdev2", 00:11:14.296 "uuid": "32df77e1-5a95-4ebd-a9de-844109ac223e", 00:11:14.296 "is_configured": true, 00:11:14.296 "data_offset": 2048, 00:11:14.296 "data_size": 63488 00:11:14.296 }, 00:11:14.296 { 
00:11:14.296 "name": "BaseBdev3", 00:11:14.296 "uuid": "0a69a457-1b21-4cba-81c0-c3e8eb1bab83", 00:11:14.296 "is_configured": true, 00:11:14.296 "data_offset": 2048, 00:11:14.296 "data_size": 63488 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "name": "BaseBdev4", 00:11:14.296 "uuid": "68dfc25e-dc5f-4c27-883f-1a99407fe44f", 00:11:14.296 "is_configured": true, 00:11:14.296 "data_offset": 2048, 00:11:14.296 "data_size": 63488 00:11:14.296 } 00:11:14.296 ] 00:11:14.296 } 00:11:14.296 } 00:11:14.296 }' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:14.296 BaseBdev2 00:11:14.296 BaseBdev3 00:11:14.296 BaseBdev4' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.296 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.556 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.556 [2024-11-26 13:24:02.999535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.556 [2024-11-26 13:24:02.999562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.556 [2024-11-26 13:24:02.999634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.556 [2024-11-26 13:24:02.999965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.556 [2024-11-26 13:24:02.999994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73410 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73410 ']' 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73410 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73410 00:11:14.556 killing process with pid 73410 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73410' 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73410 00:11:14.556 [2024-11-26 13:24:03.033951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.556 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73410 00:11:14.815 [2024-11-26 13:24:03.313852] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.754 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:15.754 00:11:15.754 real 0m12.457s 00:11:15.754 user 0m20.982s 00:11:15.754 sys 0m1.731s 00:11:15.754 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:15.754 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.754 ************************************ 00:11:15.754 END TEST raid_state_function_test_sb 00:11:15.754 ************************************ 00:11:15.754 13:24:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:15.754 13:24:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.754 13:24:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.754 13:24:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.754 ************************************ 00:11:15.754 START TEST raid_superblock_test 00:11:15.754 ************************************ 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:15.754 13:24:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74086 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74086 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74086 ']' 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.754 13:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:16.013 [2024-11-26 13:24:04.403706] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:11:16.013 [2024-11-26 13:24:04.403897] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74086 ] 00:11:16.273 [2024-11-26 13:24:04.586191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.273 [2024-11-26 13:24:04.687818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.532 [2024-11-26 13:24:04.856369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.532 [2024-11-26 13:24:04.856433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:17.101 
13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.101 malloc1 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.101 [2024-11-26 13:24:05.431121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.101 [2024-11-26 13:24:05.431191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.101 [2024-11-26 13:24:05.431222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:17.101 [2024-11-26 13:24:05.431250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.101 [2024-11-26 13:24:05.433600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.101 [2024-11-26 13:24:05.433642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.101 pt1 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.101 malloc2 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.101 [2024-11-26 13:24:05.480841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.101 [2024-11-26 13:24:05.480912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.101 [2024-11-26 13:24:05.480940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.101 [2024-11-26 13:24:05.480953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.101 [2024-11-26 13:24:05.483370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.101 [2024-11-26 13:24:05.483425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.101 
pt2 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.101 malloc3 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.101 [2024-11-26 13:24:05.534036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.101 [2024-11-26 13:24:05.534089] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.101 [2024-11-26 13:24:05.534118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:17.101 [2024-11-26 13:24:05.534132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.101 [2024-11-26 13:24:05.536573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.101 [2024-11-26 13:24:05.536649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.101 pt3 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:17.101 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.102 malloc4 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.102 [2024-11-26 13:24:05.583251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:17.102 [2024-11-26 13:24:05.583351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.102 [2024-11-26 13:24:05.583379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:17.102 [2024-11-26 13:24:05.583394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.102 [2024-11-26 13:24:05.585951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.102 [2024-11-26 13:24:05.586009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:17.102 pt4 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.102 [2024-11-26 13:24:05.595264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.102 [2024-11-26 13:24:05.597440] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.102 [2024-11-26 13:24:05.597542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.102 [2024-11-26 13:24:05.597603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:17.102 [2024-11-26 13:24:05.597855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:17.102 [2024-11-26 13:24:05.597887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.102 [2024-11-26 13:24:05.598209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:17.102 [2024-11-26 13:24:05.598464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:17.102 [2024-11-26 13:24:05.598495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:17.102 [2024-11-26 13:24:05.598700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.102 
13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.102 "name": "raid_bdev1", 00:11:17.102 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2", 00:11:17.102 "strip_size_kb": 0, 00:11:17.102 "state": "online", 00:11:17.102 "raid_level": "raid1", 00:11:17.102 "superblock": true, 00:11:17.102 "num_base_bdevs": 4, 00:11:17.102 "num_base_bdevs_discovered": 4, 00:11:17.102 "num_base_bdevs_operational": 4, 00:11:17.102 "base_bdevs_list": [ 00:11:17.102 { 00:11:17.102 "name": "pt1", 00:11:17.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.102 "is_configured": true, 00:11:17.102 "data_offset": 2048, 00:11:17.102 "data_size": 63488 00:11:17.102 }, 00:11:17.102 { 00:11:17.102 "name": "pt2", 00:11:17.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.102 "is_configured": true, 00:11:17.102 "data_offset": 2048, 00:11:17.102 "data_size": 63488 00:11:17.102 }, 00:11:17.102 { 00:11:17.102 "name": "pt3", 00:11:17.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.102 "is_configured": true, 00:11:17.102 "data_offset": 2048, 00:11:17.102 "data_size": 63488 
00:11:17.102 }, 00:11:17.102 { 00:11:17.102 "name": "pt4", 00:11:17.102 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.102 "is_configured": true, 00:11:17.102 "data_offset": 2048, 00:11:17.102 "data_size": 63488 00:11:17.102 } 00:11:17.102 ] 00:11:17.102 }' 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.102 13:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.671 [2024-11-26 13:24:06.119746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.671 "name": "raid_bdev1", 00:11:17.671 "aliases": [ 00:11:17.671 "0a408758-9ad9-420f-9b6c-78728390d1d2" 00:11:17.671 ], 
00:11:17.671 "product_name": "Raid Volume", 00:11:17.671 "block_size": 512, 00:11:17.671 "num_blocks": 63488, 00:11:17.671 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2", 00:11:17.671 "assigned_rate_limits": { 00:11:17.671 "rw_ios_per_sec": 0, 00:11:17.671 "rw_mbytes_per_sec": 0, 00:11:17.671 "r_mbytes_per_sec": 0, 00:11:17.671 "w_mbytes_per_sec": 0 00:11:17.671 }, 00:11:17.671 "claimed": false, 00:11:17.671 "zoned": false, 00:11:17.671 "supported_io_types": { 00:11:17.671 "read": true, 00:11:17.671 "write": true, 00:11:17.671 "unmap": false, 00:11:17.671 "flush": false, 00:11:17.671 "reset": true, 00:11:17.671 "nvme_admin": false, 00:11:17.671 "nvme_io": false, 00:11:17.671 "nvme_io_md": false, 00:11:17.671 "write_zeroes": true, 00:11:17.671 "zcopy": false, 00:11:17.671 "get_zone_info": false, 00:11:17.671 "zone_management": false, 00:11:17.671 "zone_append": false, 00:11:17.671 "compare": false, 00:11:17.671 "compare_and_write": false, 00:11:17.671 "abort": false, 00:11:17.671 "seek_hole": false, 00:11:17.671 "seek_data": false, 00:11:17.671 "copy": false, 00:11:17.671 "nvme_iov_md": false 00:11:17.671 }, 00:11:17.671 "memory_domains": [ 00:11:17.671 { 00:11:17.671 "dma_device_id": "system", 00:11:17.671 "dma_device_type": 1 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.671 "dma_device_type": 2 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "dma_device_id": "system", 00:11:17.671 "dma_device_type": 1 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.671 "dma_device_type": 2 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "dma_device_id": "system", 00:11:17.671 "dma_device_type": 1 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.671 "dma_device_type": 2 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "dma_device_id": "system", 00:11:17.671 "dma_device_type": 1 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:17.671 "dma_device_type": 2 00:11:17.671 } 00:11:17.671 ], 00:11:17.671 "driver_specific": { 00:11:17.671 "raid": { 00:11:17.671 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2", 00:11:17.671 "strip_size_kb": 0, 00:11:17.671 "state": "online", 00:11:17.671 "raid_level": "raid1", 00:11:17.671 "superblock": true, 00:11:17.671 "num_base_bdevs": 4, 00:11:17.671 "num_base_bdevs_discovered": 4, 00:11:17.671 "num_base_bdevs_operational": 4, 00:11:17.671 "base_bdevs_list": [ 00:11:17.671 { 00:11:17.671 "name": "pt1", 00:11:17.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.671 "is_configured": true, 00:11:17.671 "data_offset": 2048, 00:11:17.671 "data_size": 63488 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "name": "pt2", 00:11:17.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.671 "is_configured": true, 00:11:17.671 "data_offset": 2048, 00:11:17.671 "data_size": 63488 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "name": "pt3", 00:11:17.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.671 "is_configured": true, 00:11:17.671 "data_offset": 2048, 00:11:17.671 "data_size": 63488 00:11:17.671 }, 00:11:17.671 { 00:11:17.671 "name": "pt4", 00:11:17.671 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.671 "is_configured": true, 00:11:17.671 "data_offset": 2048, 00:11:17.671 "data_size": 63488 00:11:17.671 } 00:11:17.671 ] 00:11:17.671 } 00:11:17.671 } 00:11:17.671 }' 00:11:17.671 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.672 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:17.672 pt2 00:11:17.672 pt3 00:11:17.672 pt4' 00:11:17.672 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.931 13:24:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:17.931 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.931 [2024-11-26 13:24:06.487791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0a408758-9ad9-420f-9b6c-78728390d1d2 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0a408758-9ad9-420f-9b6c-78728390d1d2 ']' 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.191 [2024-11-26 13:24:06.535473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.191 [2024-11-26 13:24:06.535499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.191 [2024-11-26 13:24:06.535566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.191 [2024-11-26 13:24:06.535661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.191 [2024-11-26 13:24:06.535681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.191 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.192 13:24:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 [2024-11-26 13:24:06.687512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:18.192 [2024-11-26 13:24:06.689687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:18.192 [2024-11-26 13:24:06.689751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:18.192 [2024-11-26 13:24:06.689797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:18.192 [2024-11-26 13:24:06.689853] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:18.192 [2024-11-26 13:24:06.689908] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:18.192 [2024-11-26 13:24:06.689936] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:18.192 [2024-11-26 13:24:06.689977] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:18.192 [2024-11-26 13:24:06.689995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.192 [2024-11-26 13:24:06.690008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:18.192 request: 00:11:18.192 { 00:11:18.192 "name": "raid_bdev1", 00:11:18.192 "raid_level": "raid1", 00:11:18.192 "base_bdevs": [ 00:11:18.192 "malloc1", 00:11:18.192 "malloc2", 00:11:18.192 "malloc3", 00:11:18.192 "malloc4" 00:11:18.192 ], 00:11:18.192 "superblock": false, 00:11:18.192 "method": "bdev_raid_create", 00:11:18.192 "req_id": 1 00:11:18.192 } 00:11:18.192 Got JSON-RPC error response 00:11:18.192 response: 00:11:18.192 { 00:11:18.192 "code": -17, 00:11:18.192 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:18.192 } 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.192 
13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.192 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 [2024-11-26 13:24:06.755518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.192 [2024-11-26 13:24:06.755569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.192 [2024-11-26 13:24:06.755589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:18.192 [2024-11-26 13:24:06.755618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.451 [2024-11-26 13:24:06.758011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.451 [2024-11-26 13:24:06.758055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.451 [2024-11-26 13:24:06.758122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.451 [2024-11-26 13:24:06.758180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.451 pt1 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.451 13:24:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.451 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.452 "name": "raid_bdev1", 00:11:18.452 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2", 00:11:18.452 "strip_size_kb": 0, 00:11:18.452 "state": "configuring", 00:11:18.452 "raid_level": "raid1", 00:11:18.452 "superblock": true, 00:11:18.452 "num_base_bdevs": 4, 00:11:18.452 "num_base_bdevs_discovered": 1, 00:11:18.452 "num_base_bdevs_operational": 4, 00:11:18.452 "base_bdevs_list": [ 00:11:18.452 { 00:11:18.452 "name": "pt1", 00:11:18.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.452 "is_configured": true, 00:11:18.452 "data_offset": 2048, 00:11:18.452 "data_size": 63488 00:11:18.452 }, 00:11:18.452 { 00:11:18.452 "name": null, 00:11:18.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.452 "is_configured": false, 00:11:18.452 "data_offset": 2048, 00:11:18.452 "data_size": 63488 00:11:18.452 }, 00:11:18.452 { 00:11:18.452 "name": null, 00:11:18.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.452 
"is_configured": false, 00:11:18.452 "data_offset": 2048, 00:11:18.452 "data_size": 63488 00:11:18.452 }, 00:11:18.452 { 00:11:18.452 "name": null, 00:11:18.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.452 "is_configured": false, 00:11:18.452 "data_offset": 2048, 00:11:18.452 "data_size": 63488 00:11:18.452 } 00:11:18.452 ] 00:11:18.452 }' 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.452 13:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.710 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:18.710 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.710 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.711 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.711 [2024-11-26 13:24:07.271645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.711 [2024-11-26 13:24:07.271696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.711 [2024-11-26 13:24:07.271717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:18.711 [2024-11-26 13:24:07.271730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.711 [2024-11-26 13:24:07.272093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.711 [2024-11-26 13:24:07.272129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.711 [2024-11-26 13:24:07.272192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.711 [2024-11-26 13:24:07.272227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:18.974 pt2 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.974 [2024-11-26 13:24:07.279695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.974 13:24:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.974 "name": "raid_bdev1", 00:11:18.974 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2", 00:11:18.974 "strip_size_kb": 0, 00:11:18.974 "state": "configuring", 00:11:18.974 "raid_level": "raid1", 00:11:18.974 "superblock": true, 00:11:18.974 "num_base_bdevs": 4, 00:11:18.974 "num_base_bdevs_discovered": 1, 00:11:18.974 "num_base_bdevs_operational": 4, 00:11:18.974 "base_bdevs_list": [ 00:11:18.974 { 00:11:18.974 "name": "pt1", 00:11:18.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.974 "is_configured": true, 00:11:18.974 "data_offset": 2048, 00:11:18.974 "data_size": 63488 00:11:18.974 }, 00:11:18.974 { 00:11:18.974 "name": null, 00:11:18.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.974 "is_configured": false, 00:11:18.974 "data_offset": 0, 00:11:18.974 "data_size": 63488 00:11:18.974 }, 00:11:18.974 { 00:11:18.974 "name": null, 00:11:18.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.974 "is_configured": false, 00:11:18.974 "data_offset": 2048, 00:11:18.974 "data_size": 63488 00:11:18.974 }, 00:11:18.974 { 00:11:18.974 "name": null, 00:11:18.974 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.974 "is_configured": false, 00:11:18.974 "data_offset": 2048, 00:11:18.974 "data_size": 63488 00:11:18.974 } 00:11:18.974 ] 00:11:18.974 }' 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.974 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.250 [2024-11-26 13:24:07.799779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:19.250 [2024-11-26 13:24:07.799826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.250 [2024-11-26 13:24:07.799853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:19.250 [2024-11-26 13:24:07.799868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.250 [2024-11-26 13:24:07.800275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.250 [2024-11-26 13:24:07.800305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:19.250 [2024-11-26 13:24:07.800379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:19.250 [2024-11-26 13:24:07.800404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:19.250 pt2 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:19.250 13:24:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.250 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.537 [2024-11-26 13:24:07.811787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:19.537 [2024-11-26 13:24:07.811833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.537 [2024-11-26 13:24:07.811855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:19.537 [2024-11-26 13:24:07.811867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.537 [2024-11-26 13:24:07.812223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.537 [2024-11-26 13:24:07.812286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:19.537 [2024-11-26 13:24:07.812355] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:19.537 [2024-11-26 13:24:07.812379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:19.537 pt3 00:11:19.537 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.537 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.537 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.537 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:19.537 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.537 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.537 [2024-11-26 13:24:07.819771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:19.537 [2024-11-26 
13:24:07.819813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.537 [2024-11-26 13:24:07.819834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:19.537 [2024-11-26 13:24:07.819846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.537 [2024-11-26 13:24:07.820204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.537 [2024-11-26 13:24:07.820265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:19.537 [2024-11-26 13:24:07.820336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:19.537 [2024-11-26 13:24:07.820361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:19.537 [2024-11-26 13:24:07.820510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.537 [2024-11-26 13:24:07.820532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:19.537 [2024-11-26 13:24:07.820803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:19.537 [2024-11-26 13:24:07.820968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.537 [2024-11-26 13:24:07.820992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:19.537 [2024-11-26 13:24:07.821119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.537 pt4 00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.538 "name": "raid_bdev1",
00:11:19.538 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2",
00:11:19.538 "strip_size_kb": 0,
00:11:19.538 "state": "online",
00:11:19.538 "raid_level": "raid1",
00:11:19.538 "superblock": true,
00:11:19.538 "num_base_bdevs": 4,
00:11:19.538 "num_base_bdevs_discovered": 4,
00:11:19.538 "num_base_bdevs_operational": 4,
00:11:19.538 "base_bdevs_list": [
00:11:19.538 {
00:11:19.538 "name": "pt1",
00:11:19.538 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:19.538 "is_configured": true,
00:11:19.538 "data_offset": 2048,
00:11:19.538 "data_size": 63488
00:11:19.538 },
00:11:19.538 {
00:11:19.538 "name": "pt2",
00:11:19.538 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:19.538 "is_configured": true,
00:11:19.538 "data_offset": 2048,
00:11:19.538 "data_size": 63488
00:11:19.538 },
00:11:19.538 {
00:11:19.538 "name": "pt3",
00:11:19.538 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:19.538 "is_configured": true,
00:11:19.538 "data_offset": 2048,
00:11:19.538 "data_size": 63488
00:11:19.538 },
00:11:19.538 {
00:11:19.538 "name": "pt4",
00:11:19.538 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:19.538 "is_configured": true,
00:11:19.538 "data_offset": 2048,
00:11:19.538 "data_size": 63488
00:11:19.538 }
00:11:19.538 ]
00:11:19.538 }'
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.538 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.806 [2024-11-26 13:24:08.344175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:19.806 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:20.065 "name": "raid_bdev1",
00:11:20.065 "aliases": [
00:11:20.065 "0a408758-9ad9-420f-9b6c-78728390d1d2"
00:11:20.065 ],
00:11:20.065 "product_name": "Raid Volume",
00:11:20.065 "block_size": 512,
00:11:20.065 "num_blocks": 63488,
00:11:20.065 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2",
00:11:20.065 "assigned_rate_limits": {
00:11:20.065 "rw_ios_per_sec": 0,
00:11:20.065 "rw_mbytes_per_sec": 0,
00:11:20.065 "r_mbytes_per_sec": 0,
00:11:20.065 "w_mbytes_per_sec": 0
00:11:20.065 },
00:11:20.065 "claimed": false,
00:11:20.065 "zoned": false,
00:11:20.065 "supported_io_types": {
00:11:20.065 "read": true,
00:11:20.065 "write": true,
00:11:20.065 "unmap": false,
00:11:20.065 "flush": false,
00:11:20.065 "reset": true,
00:11:20.065 "nvme_admin": false,
00:11:20.065 "nvme_io": false,
00:11:20.065 "nvme_io_md": false,
00:11:20.065 "write_zeroes": true,
00:11:20.065 "zcopy": false,
00:11:20.065 "get_zone_info": false,
00:11:20.065 "zone_management": false,
00:11:20.065 "zone_append": false,
00:11:20.065 "compare": false,
00:11:20.065 "compare_and_write": false,
00:11:20.065 "abort": false,
00:11:20.065 "seek_hole": false,
00:11:20.065 "seek_data": false,
00:11:20.065 "copy": false,
00:11:20.065 "nvme_iov_md": false
00:11:20.065 },
00:11:20.065 "memory_domains": [
00:11:20.065 {
00:11:20.065 "dma_device_id": "system",
00:11:20.065 "dma_device_type": 1
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:20.065 "dma_device_type": 2
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "dma_device_id": "system",
00:11:20.065 "dma_device_type": 1
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:20.065 "dma_device_type": 2
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "dma_device_id": "system",
00:11:20.065 "dma_device_type": 1
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:20.065 "dma_device_type": 2
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "dma_device_id": "system",
00:11:20.065 "dma_device_type": 1
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:20.065 "dma_device_type": 2
00:11:20.065 }
00:11:20.065 ],
00:11:20.065 "driver_specific": {
00:11:20.065 "raid": {
00:11:20.065 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2",
00:11:20.065 "strip_size_kb": 0,
00:11:20.065 "state": "online",
00:11:20.065 "raid_level": "raid1",
00:11:20.065 "superblock": true,
00:11:20.065 "num_base_bdevs": 4,
00:11:20.065 "num_base_bdevs_discovered": 4,
00:11:20.065 "num_base_bdevs_operational": 4,
00:11:20.065 "base_bdevs_list": [
00:11:20.065 {
00:11:20.065 "name": "pt1",
00:11:20.065 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:20.065 "is_configured": true,
00:11:20.065 "data_offset": 2048,
00:11:20.065 "data_size": 63488
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "name": "pt2",
00:11:20.065 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:20.065 "is_configured": true,
00:11:20.065 "data_offset": 2048,
00:11:20.065 "data_size": 63488
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "name": "pt3",
00:11:20.065 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:20.065 "is_configured": true,
00:11:20.065 "data_offset": 2048,
00:11:20.065 "data_size": 63488
00:11:20.065 },
00:11:20.065 {
00:11:20.065 "name": "pt4",
00:11:20.065 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:20.065 "is_configured": true,
00:11:20.065 "data_offset": 2048,
00:11:20.065 "data_size": 63488
00:11:20.065 }
00:11:20.065 ]
00:11:20.065 }
00:11:20.065 }
00:11:20.065 }'
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:20.065 pt2
00:11:20.065 pt3
00:11:20.065 pt4'
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.065 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.066 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:20.325 [2024-11-26 13:24:08.716218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0a408758-9ad9-420f-9b6c-78728390d1d2 '!=' 0a408758-9ad9-420f-9b6c-78728390d1d2 ']'
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.325 [2024-11-26 13:24:08.768000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.325 "name": "raid_bdev1",
00:11:20.325 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2",
00:11:20.325 "strip_size_kb": 0,
00:11:20.325 "state": "online",
00:11:20.325 "raid_level": "raid1",
00:11:20.325 "superblock": true,
00:11:20.325 "num_base_bdevs": 4,
00:11:20.325 "num_base_bdevs_discovered": 3,
00:11:20.325 "num_base_bdevs_operational": 3,
00:11:20.325 "base_bdevs_list": [
00:11:20.325 {
00:11:20.325 "name": null,
00:11:20.325 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.325 "is_configured": false,
00:11:20.325 "data_offset": 0,
00:11:20.325 "data_size": 63488
00:11:20.325 },
00:11:20.325 {
00:11:20.325 "name": "pt2",
00:11:20.325 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:20.325 "is_configured": true,
00:11:20.325 "data_offset": 2048,
00:11:20.325 "data_size": 63488
00:11:20.325 },
00:11:20.325 {
00:11:20.325 "name": "pt3",
00:11:20.325 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:20.325 "is_configured": true,
00:11:20.325 "data_offset": 2048,
00:11:20.325 "data_size": 63488
00:11:20.325 },
00:11:20.325 {
00:11:20.325 "name": "pt4",
00:11:20.325 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:20.325 "is_configured": true,
00:11:20.325 "data_offset": 2048,
00:11:20.325 "data_size": 63488
00:11:20.325 }
00:11:20.325 ]
00:11:20.325 }'
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.325 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 [2024-11-26 13:24:09.288080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:20.891 [2024-11-26 13:24:09.288109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:20.891 [2024-11-26 13:24:09.288164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:20.891 [2024-11-26 13:24:09.288264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:20.891 [2024-11-26 13:24:09.288279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 [2024-11-26 13:24:09.376099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:20.891 [2024-11-26 13:24:09.376145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:20.891 [2024-11-26 13:24:09.376167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:11:20.891 [2024-11-26 13:24:09.376178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:20.891 [2024-11-26 13:24:09.378505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:20.891 [2024-11-26 13:24:09.378569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:20.891 [2024-11-26 13:24:09.378647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:20.891 [2024-11-26 13:24:09.378694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:20.891 pt2
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.891 "name": "raid_bdev1",
00:11:20.891 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2",
00:11:20.891 "strip_size_kb": 0,
00:11:20.891 "state": "configuring",
00:11:20.891 "raid_level": "raid1",
00:11:20.891 "superblock": true,
00:11:20.891 "num_base_bdevs": 4,
00:11:20.891 "num_base_bdevs_discovered": 1,
00:11:20.891 "num_base_bdevs_operational": 3,
00:11:20.891 "base_bdevs_list": [
00:11:20.891 {
00:11:20.891 "name": null,
00:11:20.891 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.891 "is_configured": false,
00:11:20.891 "data_offset": 2048,
00:11:20.891 "data_size": 63488
00:11:20.891 },
00:11:20.891 {
00:11:20.891 "name": "pt2",
00:11:20.891 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:20.891 "is_configured": true,
00:11:20.891 "data_offset": 2048,
00:11:20.891 "data_size": 63488
00:11:20.891 },
00:11:20.891 {
00:11:20.891 "name": null,
00:11:20.891 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:20.891 "is_configured": false,
00:11:20.891 "data_offset": 2048,
00:11:20.891 "data_size": 63488
00:11:20.891 },
00:11:20.891 {
00:11:20.891 "name": null,
00:11:20.891 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:20.891 "is_configured": false,
00:11:20.891 "data_offset": 2048,
00:11:20.891 "data_size": 63488
00:11:20.891 }
00:11:20.891 ]
00:11:20.891 }'
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.891 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.459 [2024-11-26 13:24:09.900206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:21.459 [2024-11-26 13:24:09.900283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:21.459 [2024-11-26 13:24:09.900307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:11:21.459 [2024-11-26 13:24:09.900320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:21.459 [2024-11-26 13:24:09.900746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:21.459 [2024-11-26 13:24:09.900776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:21.459 [2024-11-26 13:24:09.900847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:21.459 [2024-11-26 13:24:09.900870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:21.459 pt3
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:21.459 "name": "raid_bdev1",
00:11:21.459 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2",
00:11:21.459 "strip_size_kb": 0,
00:11:21.459 "state": "configuring",
00:11:21.459 "raid_level": "raid1",
00:11:21.459 "superblock": true,
00:11:21.459 "num_base_bdevs": 4,
00:11:21.459 "num_base_bdevs_discovered": 2,
00:11:21.459 "num_base_bdevs_operational": 3,
00:11:21.459 "base_bdevs_list": [
00:11:21.459 {
00:11:21.459 "name": null,
00:11:21.459 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.459 "is_configured": false,
00:11:21.459 "data_offset": 2048,
00:11:21.459 "data_size": 63488
00:11:21.459 },
00:11:21.459 {
00:11:21.459 "name": "pt2",
00:11:21.459 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:21.459 "is_configured": true,
00:11:21.459 "data_offset": 2048,
00:11:21.459 "data_size": 63488
00:11:21.459 },
00:11:21.459 {
00:11:21.459 "name": "pt3",
00:11:21.459 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:21.459 "is_configured": true,
00:11:21.459 "data_offset": 2048,
00:11:21.459 "data_size": 63488
00:11:21.459 },
00:11:21.459 {
00:11:21.459 "name": null,
00:11:21.459 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:21.459 "is_configured": false,
00:11:21.459 "data_offset": 2048,
00:11:21.459 "data_size": 63488
00:11:21.459 }
00:11:21.459 ]
00:11:21.459 }'
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:21.459 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.028 [2024-11-26 13:24:10.424395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:22.028 [2024-11-26 13:24:10.424452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:22.028 [2024-11-26 13:24:10.424478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:11:22.028 [2024-11-26 13:24:10.424491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:22.028 [2024-11-26 13:24:10.424955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:22.028 [2024-11-26 13:24:10.424990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:22.028 [2024-11-26 13:24:10.425081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:22.028 [2024-11-26 13:24:10.425129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:22.028 [2024-11-26 13:24:10.425325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:11:22.028 [2024-11-26 13:24:10.425351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:22.028 [2024-11-26 13:24:10.425671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:22.028 [2024-11-26 13:24:10.425893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:11:22.028 [2024-11-26 13:24:10.425922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:11:22.028 [2024-11-26 13:24:10.426078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:22.028 pt4
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.028 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:22.028 "name": "raid_bdev1",
00:11:22.028 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2",
00:11:22.028 "strip_size_kb": 0,
00:11:22.028 "state": "online",
00:11:22.028 "raid_level": "raid1",
00:11:22.029 "superblock": true,
00:11:22.029 "num_base_bdevs": 4,
00:11:22.029 "num_base_bdevs_discovered": 3,
00:11:22.029 "num_base_bdevs_operational": 3,
00:11:22.029 "base_bdevs_list": [
00:11:22.029 {
00:11:22.029 "name": null,
00:11:22.029 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:22.029 "is_configured": false,
00:11:22.029 "data_offset": 2048,
00:11:22.029 "data_size": 63488
00:11:22.029 },
00:11:22.029 {
00:11:22.029 "name": "pt2",
00:11:22.029 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:22.029 "is_configured": true,
00:11:22.029 "data_offset": 2048,
00:11:22.029 "data_size": 63488
00:11:22.029 },
00:11:22.029 {
00:11:22.029 "name": "pt3",
00:11:22.029 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:22.029 "is_configured": true,
00:11:22.029 "data_offset": 2048,
00:11:22.029 "data_size": 63488
00:11:22.029 },
00:11:22.029 {
00:11:22.029 "name": "pt4",
00:11:22.029 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:22.029 "is_configured": true,
00:11:22.029 "data_offset": 2048,
00:11:22.029 "data_size": 63488
00:11:22.029 }
00:11:22.029 ]
00:11:22.029 }'
00:11:22.029 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:22.029 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.599 [2024-11-26 13:24:10.948466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:22.599 [2024-11-26 13:24:10.948495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:22.599 [2024-11-26 13:24:10.948570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:22.599 [2024-11-26 13:24:10.948686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:22.599 [2024-11-26 13:24:10.948702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.599 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.599 [2024-11-26 13:24:11.020491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:22.599 [2024-11-26 13:24:11.020579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:22.599 [2024-11-26 13:24:11.020615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:11:22.599 [2024-11-26 13:24:11.020631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:22.599 [2024-11-26 13:24:11.023147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:22.599 [2024-11-26 13:24:11.023210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:22.599 [2024-11-26 13:24:11.023329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:22.599 [2024-11-26 13:24:11.023382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:22.599 [2024-11-26 13:24:11.023553] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:11:22.599 [2024-11-26 13:24:11.023586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:22.599 [2024-11-26 13:24:11.023605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:11:22.599 [2024-11-26 13:24:11.023695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:22.599 [2024-11-26 13:24:11.023833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:22.599 pt1
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local
expected_state=configuring 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.599 "name": "raid_bdev1", 00:11:22.599 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2", 00:11:22.599 "strip_size_kb": 0, 00:11:22.599 "state": "configuring", 00:11:22.599 "raid_level": "raid1", 00:11:22.599 "superblock": true, 00:11:22.599 "num_base_bdevs": 4, 00:11:22.599 "num_base_bdevs_discovered": 2, 00:11:22.599 "num_base_bdevs_operational": 3, 00:11:22.599 "base_bdevs_list": [ 00:11:22.599 { 00:11:22.599 "name": null, 00:11:22.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.599 "is_configured": false, 00:11:22.599 "data_offset": 2048, 00:11:22.599 
"data_size": 63488 00:11:22.599 }, 00:11:22.599 { 00:11:22.599 "name": "pt2", 00:11:22.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.599 "is_configured": true, 00:11:22.599 "data_offset": 2048, 00:11:22.599 "data_size": 63488 00:11:22.599 }, 00:11:22.599 { 00:11:22.599 "name": "pt3", 00:11:22.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.599 "is_configured": true, 00:11:22.599 "data_offset": 2048, 00:11:22.599 "data_size": 63488 00:11:22.599 }, 00:11:22.599 { 00:11:22.599 "name": null, 00:11:22.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.599 "is_configured": false, 00:11:22.599 "data_offset": 2048, 00:11:22.599 "data_size": 63488 00:11:22.599 } 00:11:22.599 ] 00:11:22.599 }' 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.599 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.168 [2024-11-26 
13:24:11.592624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:23.168 [2024-11-26 13:24:11.592723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.168 [2024-11-26 13:24:11.592748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:23.168 [2024-11-26 13:24:11.592760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.168 [2024-11-26 13:24:11.593162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.168 [2024-11-26 13:24:11.593197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:23.168 [2024-11-26 13:24:11.593320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:23.168 [2024-11-26 13:24:11.593371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:23.168 [2024-11-26 13:24:11.593514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:23.168 [2024-11-26 13:24:11.593539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:23.168 [2024-11-26 13:24:11.593831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:23.168 [2024-11-26 13:24:11.594031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:23.168 [2024-11-26 13:24:11.594060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:23.168 [2024-11-26 13:24:11.594219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.168 pt4 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:23.168 13:24:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.168 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.168 "name": "raid_bdev1", 00:11:23.168 "uuid": "0a408758-9ad9-420f-9b6c-78728390d1d2", 00:11:23.168 "strip_size_kb": 0, 00:11:23.168 "state": "online", 00:11:23.169 "raid_level": "raid1", 00:11:23.169 "superblock": true, 00:11:23.169 "num_base_bdevs": 4, 00:11:23.169 "num_base_bdevs_discovered": 3, 00:11:23.169 "num_base_bdevs_operational": 3, 00:11:23.169 "base_bdevs_list": [ 00:11:23.169 { 
00:11:23.169 "name": null, 00:11:23.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.169 "is_configured": false, 00:11:23.169 "data_offset": 2048, 00:11:23.169 "data_size": 63488 00:11:23.169 }, 00:11:23.169 { 00:11:23.169 "name": "pt2", 00:11:23.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.169 "is_configured": true, 00:11:23.169 "data_offset": 2048, 00:11:23.169 "data_size": 63488 00:11:23.169 }, 00:11:23.169 { 00:11:23.169 "name": "pt3", 00:11:23.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.169 "is_configured": true, 00:11:23.169 "data_offset": 2048, 00:11:23.169 "data_size": 63488 00:11:23.169 }, 00:11:23.169 { 00:11:23.169 "name": "pt4", 00:11:23.169 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:23.169 "is_configured": true, 00:11:23.169 "data_offset": 2048, 00:11:23.169 "data_size": 63488 00:11:23.169 } 00:11:23.169 ] 00:11:23.169 }' 00:11:23.169 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.169 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:23.738 
13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.738 [2024-11-26 13:24:12.181033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0a408758-9ad9-420f-9b6c-78728390d1d2 '!=' 0a408758-9ad9-420f-9b6c-78728390d1d2 ']' 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74086 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74086 ']' 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74086 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74086 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.738 killing process with pid 74086 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74086' 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74086 00:11:23.738 [2024-11-26 13:24:12.248746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.738 13:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74086 00:11:23.738 [2024-11-26 13:24:12.248831] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.738 [2024-11-26 13:24:12.248905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.738 [2024-11-26 13:24:12.248923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:23.997 [2024-11-26 13:24:12.519671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.934 13:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:24.934 00:11:24.934 real 0m9.066s 00:11:24.934 user 0m15.186s 00:11:24.934 sys 0m1.323s 00:11:24.934 13:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.934 13:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.934 ************************************ 00:11:24.934 END TEST raid_superblock_test 00:11:24.934 ************************************ 00:11:24.934 13:24:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:24.934 13:24:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:24.934 13:24:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.934 13:24:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.934 ************************************ 00:11:24.934 START TEST raid_read_error_test 00:11:24.934 ************************************ 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:24.934 13:24:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ejsZXlCDLe 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74584 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74584 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74584 ']' 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.934 13:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 [2024-11-26 13:24:13.513966] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:11:25.194 [2024-11-26 13:24:13.514132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74584 ] 00:11:25.194 [2024-11-26 13:24:13.672306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.453 [2024-11-26 13:24:13.778488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.453 [2024-11-26 13:24:13.947069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.453 [2024-11-26 13:24:13.947136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.021 BaseBdev1_malloc 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.021 true 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.021 [2024-11-26 13:24:14.546112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:26.021 [2024-11-26 13:24:14.546193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.021 [2024-11-26 13:24:14.546219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:26.021 [2024-11-26 13:24:14.546255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.021 [2024-11-26 13:24:14.548823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.021 [2024-11-26 13:24:14.548901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:26.021 BaseBdev1 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.021 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.280 BaseBdev2_malloc 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.280 true 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.280 [2024-11-26 13:24:14.600983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:26.280 [2024-11-26 13:24:14.601058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.280 [2024-11-26 13:24:14.601081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:26.280 [2024-11-26 13:24:14.601096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.280 [2024-11-26 13:24:14.603750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.280 [2024-11-26 13:24:14.603826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:26.280 BaseBdev2 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.280 BaseBdev3_malloc 00:11:26.280 13:24:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.280 true 00:11:26.280 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 [2024-11-26 13:24:14.664171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:26.281 [2024-11-26 13:24:14.664229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.281 [2024-11-26 13:24:14.664286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:26.281 [2024-11-26 13:24:14.664303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.281 [2024-11-26 13:24:14.666859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.281 [2024-11-26 13:24:14.666904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:26.281 BaseBdev3 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 BaseBdev4_malloc 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 true 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 [2024-11-26 13:24:14.718371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:26.281 [2024-11-26 13:24:14.718604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.281 [2024-11-26 13:24:14.718638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.281 [2024-11-26 13:24:14.718656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.281 [2024-11-26 13:24:14.721130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.281 [2024-11-26 13:24:14.721178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:26.281 BaseBdev4 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 [2024-11-26 13:24:14.726436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.281 [2024-11-26 13:24:14.728764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.281 [2024-11-26 13:24:14.728860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.281 [2024-11-26 13:24:14.728945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.281 [2024-11-26 13:24:14.729195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:26.281 [2024-11-26 13:24:14.729215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.281 [2024-11-26 13:24:14.729489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:26.281 [2024-11-26 13:24:14.729672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:26.281 [2024-11-26 13:24:14.729686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:26.281 [2024-11-26 13:24:14.729848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:26.281 13:24:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.281 "name": "raid_bdev1", 00:11:26.281 "uuid": "c502d69f-e863-4844-a96c-991850888737", 00:11:26.281 "strip_size_kb": 0, 00:11:26.281 "state": "online", 00:11:26.281 "raid_level": "raid1", 00:11:26.281 "superblock": true, 00:11:26.281 "num_base_bdevs": 4, 00:11:26.281 "num_base_bdevs_discovered": 4, 00:11:26.281 "num_base_bdevs_operational": 4, 00:11:26.281 "base_bdevs_list": [ 00:11:26.281 { 
00:11:26.281 "name": "BaseBdev1", 00:11:26.281 "uuid": "f9f38db6-aec1-54be-87c6-cb4f05a8a261", 00:11:26.281 "is_configured": true, 00:11:26.281 "data_offset": 2048, 00:11:26.281 "data_size": 63488 00:11:26.281 }, 00:11:26.281 { 00:11:26.281 "name": "BaseBdev2", 00:11:26.281 "uuid": "8ffcd5cb-a50a-546f-9b30-d3e77425c37e", 00:11:26.281 "is_configured": true, 00:11:26.281 "data_offset": 2048, 00:11:26.281 "data_size": 63488 00:11:26.281 }, 00:11:26.281 { 00:11:26.281 "name": "BaseBdev3", 00:11:26.281 "uuid": "1a271e1e-51a6-50d5-8c8a-50eeb6a3632e", 00:11:26.281 "is_configured": true, 00:11:26.281 "data_offset": 2048, 00:11:26.281 "data_size": 63488 00:11:26.281 }, 00:11:26.281 { 00:11:26.281 "name": "BaseBdev4", 00:11:26.281 "uuid": "b6f0d319-da39-5620-93fd-cfbc8d813db7", 00:11:26.281 "is_configured": true, 00:11:26.281 "data_offset": 2048, 00:11:26.281 "data_size": 63488 00:11:26.281 } 00:11:26.281 ] 00:11:26.281 }' 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.281 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.849 13:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:26.849 13:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:26.849 [2024-11-26 13:24:15.371734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:27.786 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.787 13:24:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.787 13:24:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.787 "name": "raid_bdev1", 00:11:27.787 "uuid": "c502d69f-e863-4844-a96c-991850888737", 00:11:27.787 "strip_size_kb": 0, 00:11:27.787 "state": "online", 00:11:27.787 "raid_level": "raid1", 00:11:27.787 "superblock": true, 00:11:27.787 "num_base_bdevs": 4, 00:11:27.787 "num_base_bdevs_discovered": 4, 00:11:27.787 "num_base_bdevs_operational": 4, 00:11:27.787 "base_bdevs_list": [ 00:11:27.787 { 00:11:27.787 "name": "BaseBdev1", 00:11:27.787 "uuid": "f9f38db6-aec1-54be-87c6-cb4f05a8a261", 00:11:27.787 "is_configured": true, 00:11:27.787 "data_offset": 2048, 00:11:27.787 "data_size": 63488 00:11:27.787 }, 00:11:27.787 { 00:11:27.787 "name": "BaseBdev2", 00:11:27.787 "uuid": "8ffcd5cb-a50a-546f-9b30-d3e77425c37e", 00:11:27.787 "is_configured": true, 00:11:27.787 "data_offset": 2048, 00:11:27.787 "data_size": 63488 00:11:27.787 }, 00:11:27.787 { 00:11:27.787 "name": "BaseBdev3", 00:11:27.787 "uuid": "1a271e1e-51a6-50d5-8c8a-50eeb6a3632e", 00:11:27.787 "is_configured": true, 00:11:27.787 "data_offset": 2048, 00:11:27.787 "data_size": 63488 00:11:27.787 }, 00:11:27.787 { 00:11:27.787 "name": "BaseBdev4", 00:11:27.787 "uuid": "b6f0d319-da39-5620-93fd-cfbc8d813db7", 00:11:27.787 "is_configured": true, 00:11:27.787 "data_offset": 2048, 00:11:27.787 "data_size": 63488 00:11:27.787 } 00:11:27.787 ] 00:11:27.787 }' 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.787 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.355 [2024-11-26 13:24:16.820388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.355 [2024-11-26 13:24:16.820424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.355 [2024-11-26 13:24:16.823773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.355 [2024-11-26 13:24:16.823843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.355 [2024-11-26 13:24:16.823978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.355 [2024-11-26 13:24:16.823996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:28.355 { 00:11:28.355 "results": [ 00:11:28.355 { 00:11:28.355 "job": "raid_bdev1", 00:11:28.355 "core_mask": "0x1", 00:11:28.355 "workload": "randrw", 00:11:28.355 "percentage": 50, 00:11:28.355 "status": "finished", 00:11:28.355 "queue_depth": 1, 00:11:28.355 "io_size": 131072, 00:11:28.355 "runtime": 1.446657, 00:11:28.355 "iops": 9130.014924062856, 00:11:28.355 "mibps": 1141.251865507857, 00:11:28.355 "io_failed": 0, 00:11:28.355 "io_timeout": 0, 00:11:28.355 "avg_latency_us": 106.00553714002533, 00:11:28.355 "min_latency_us": 36.305454545454545, 00:11:28.355 "max_latency_us": 1623.5054545454545 00:11:28.355 } 00:11:28.355 ], 00:11:28.355 "core_count": 1 00:11:28.355 } 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74584 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74584 ']' 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74584 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74584 00:11:28.355 killing process with pid 74584 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74584' 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74584 00:11:28.355 [2024-11-26 13:24:16.861084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.355 13:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74584 00:11:28.614 [2024-11-26 13:24:17.082853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ejsZXlCDLe 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:29.551 13:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:29.551 00:11:29.551 real 0m4.552s 00:11:29.552 user 0m5.683s 00:11:29.552 sys 0m0.599s 
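The pass/fail criterion of these error tests hinges on how many base bdevs should remain after the injected failure. The branch is visible in the trace (bdev_raid.sh @831-@835): for raid1, an injected read error is survivable because the data can be served from a mirror copy, so all members stay; a write error fails the member. A minimal Python sketch of that decision, where the function name `expected_num_base_bdevs` is my own and not an SPDK identifier:

```python
def expected_num_base_bdevs(raid_level: str, error_io_type: str, num_base_bdevs: int) -> int:
    """Mirror of the branch in the trace (bdev_raid.sh @831-@835):
    raid1 tolerates an injected read error (served from a mirror, no member
    removed), but a write error fails and removes the affected member."""
    if raid_level == "raid1" and error_io_type == "write":
        return num_base_bdevs - 1   # failing member is dropped from the array
    return num_base_bdevs           # read errors leave the array intact

# The two runs in this log:
print(expected_num_base_bdevs("raid1", "read", 4))   # prints 4 (read test)
print(expected_num_base_bdevs("raid1", "write", 4))  # prints 3 (write test)
```

This matches the two `verify_raid_bdev_state raid_bdev1 online raid1 0 N` calls in the log: N=4 after the read-error run, N=3 after the write-error run.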
00:11:29.552 ************************************ 00:11:29.552 END TEST raid_read_error_test 00:11:29.552 ************************************ 00:11:29.552 13:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.552 13:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.552 13:24:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:29.552 13:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.552 13:24:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.552 13:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.552 ************************************ 00:11:29.552 START TEST raid_write_error_test 00:11:29.552 ************************************ 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iUHGNr0cLP 00:11:29.552 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74724 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74724 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74724 ']' 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.552 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.811 [2024-11-26 13:24:18.155087] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:11:29.811 [2024-11-26 13:24:18.155293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74724 ] 00:11:29.811 [2024-11-26 13:24:18.337536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.070 [2024-11-26 13:24:18.444796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.070 [2024-11-26 13:24:18.613099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.071 [2024-11-26 13:24:18.613159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.639 BaseBdev1_malloc 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.639 true 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.639 [2024-11-26 13:24:19.182344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.639 [2024-11-26 13:24:19.182597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.639 [2024-11-26 13:24:19.182635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:30.639 [2024-11-26 13:24:19.182652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.639 [2024-11-26 13:24:19.185324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.639 [2024-11-26 13:24:19.185371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.639 BaseBdev1 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.639 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 BaseBdev2_malloc 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:30.898 13:24:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 true 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 [2024-11-26 13:24:19.232459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:30.898 [2024-11-26 13:24:19.232531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.898 [2024-11-26 13:24:19.232555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:30.898 [2024-11-26 13:24:19.232569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.898 [2024-11-26 13:24:19.235095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.898 [2024-11-26 13:24:19.235142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.898 BaseBdev2 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:30.898 BaseBdev3_malloc 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 true 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 [2024-11-26 13:24:19.297128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:30.898 [2024-11-26 13:24:19.297198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.898 [2024-11-26 13:24:19.297221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.898 [2024-11-26 13:24:19.297236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.898 [2024-11-26 13:24:19.299762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.898 [2024-11-26 13:24:19.299806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.898 BaseBdev3 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.898 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 BaseBdev4_malloc 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.899 true 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.899 [2024-11-26 13:24:19.347368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:30.899 [2024-11-26 13:24:19.347439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.899 [2024-11-26 13:24:19.347462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.899 [2024-11-26 13:24:19.347477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.899 [2024-11-26 13:24:19.349927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.899 [2024-11-26 13:24:19.349973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.899 BaseBdev4 
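As the repeated `rpc_cmd` calls above show, each member of the future array is built in three layers: a malloc bdev, an error-injection bdev wrapped around it, and a passthru bdev on top whose name (`BaseBdevN`) the raid later consumes. A hypothetical helper that emits that command sequence; the helper itself is not part of SPDK, but the RPC method names and arguments are taken verbatim from the log:

```python
def base_bdev_setup_cmds(n: int, size_mb: int = 32, block_size: int = 512) -> list[str]:
    """Emit the rpc.py command sequence this test issues per base bdev
    (malloc -> error wrapper -> passthru rename), then the raid1 create."""
    names = [f"BaseBdev{i}" for i in range(1, n + 1)]
    cmds = []
    for name in names:
        cmds.append(f"bdev_malloc_create {size_mb} {block_size} -b {name}_malloc")
        cmds.append(f"bdev_error_create {name}_malloc")
        cmds.append(f"bdev_passthru_create -b EE_{name}_malloc -p {name}")
    # -s requests an on-disk superblock, matching the "superblock": true state below
    cmds.append("bdev_raid_create -r raid1 -b '" + " ".join(names) + "' -n raid_bdev1 -s")
    return cmds

for cmd in base_bdev_setup_cmds(4):
    print(cmd)
```

The `EE_` prefix is added by `bdev_error_create`, which is why the passthru layer (and the later `bdev_error_inject_error`) address `EE_BaseBdevN_malloc`.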
00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.899 [2024-11-26 13:24:19.355415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.899 [2024-11-26 13:24:19.357652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.899 [2024-11-26 13:24:19.357748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.899 [2024-11-26 13:24:19.357834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.899 [2024-11-26 13:24:19.358087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:30.899 [2024-11-26 13:24:19.358108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.899 [2024-11-26 13:24:19.358383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:30.899 [2024-11-26 13:24:19.358588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:30.899 [2024-11-26 13:24:19.358603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:30.899 [2024-11-26 13:24:19.358758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.899 "name": "raid_bdev1", 00:11:30.899 "uuid": "f2fdaea1-1fd9-4fac-9e7f-006f26ffcc99", 00:11:30.899 "strip_size_kb": 0, 00:11:30.899 "state": "online", 00:11:30.899 "raid_level": "raid1", 00:11:30.899 "superblock": true, 00:11:30.899 "num_base_bdevs": 4, 00:11:30.899 "num_base_bdevs_discovered": 4, 00:11:30.899 
"num_base_bdevs_operational": 4, 00:11:30.899 "base_bdevs_list": [ 00:11:30.899 { 00:11:30.899 "name": "BaseBdev1", 00:11:30.899 "uuid": "d27fa6ed-7dc2-5a2e-829b-e523f009b10e", 00:11:30.899 "is_configured": true, 00:11:30.899 "data_offset": 2048, 00:11:30.899 "data_size": 63488 00:11:30.899 }, 00:11:30.899 { 00:11:30.899 "name": "BaseBdev2", 00:11:30.899 "uuid": "76ef8a06-f949-5bed-8911-3611f0206fc0", 00:11:30.899 "is_configured": true, 00:11:30.899 "data_offset": 2048, 00:11:30.899 "data_size": 63488 00:11:30.899 }, 00:11:30.899 { 00:11:30.899 "name": "BaseBdev3", 00:11:30.899 "uuid": "7013d7d6-5446-5979-a581-bac5a9276804", 00:11:30.899 "is_configured": true, 00:11:30.899 "data_offset": 2048, 00:11:30.899 "data_size": 63488 00:11:30.899 }, 00:11:30.899 { 00:11:30.899 "name": "BaseBdev4", 00:11:30.899 "uuid": "c5d7a846-0514-5ad9-87ba-7fe900f4268c", 00:11:30.899 "is_configured": true, 00:11:30.899 "data_offset": 2048, 00:11:30.899 "data_size": 63488 00:11:30.899 } 00:11:30.899 ] 00:11:30.899 }' 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.899 13:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.466 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:31.466 13:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:31.466 [2024-11-26 13:24:19.980759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:32.403 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:32.403 13:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.403 13:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.403 [2024-11-26 13:24:20.858116] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:32.403 [2024-11-26 13:24:20.858388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.403 [2024-11-26 13:24:20.858701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:32.403 13:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.404 "name": "raid_bdev1", 00:11:32.404 "uuid": "f2fdaea1-1fd9-4fac-9e7f-006f26ffcc99", 00:11:32.404 "strip_size_kb": 0, 00:11:32.404 "state": "online", 00:11:32.404 "raid_level": "raid1", 00:11:32.404 "superblock": true, 00:11:32.404 "num_base_bdevs": 4, 00:11:32.404 "num_base_bdevs_discovered": 3, 00:11:32.404 "num_base_bdevs_operational": 3, 00:11:32.404 "base_bdevs_list": [ 00:11:32.404 { 00:11:32.404 "name": null, 00:11:32.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.404 "is_configured": false, 00:11:32.404 "data_offset": 0, 00:11:32.404 "data_size": 63488 00:11:32.404 }, 00:11:32.404 { 00:11:32.404 "name": "BaseBdev2", 00:11:32.404 "uuid": "76ef8a06-f949-5bed-8911-3611f0206fc0", 00:11:32.404 "is_configured": true, 00:11:32.404 "data_offset": 2048, 00:11:32.404 "data_size": 63488 00:11:32.404 }, 00:11:32.404 { 00:11:32.404 "name": "BaseBdev3", 00:11:32.404 "uuid": "7013d7d6-5446-5979-a581-bac5a9276804", 00:11:32.404 "is_configured": true, 00:11:32.404 "data_offset": 2048, 00:11:32.404 "data_size": 63488 00:11:32.404 }, 00:11:32.404 { 00:11:32.404 "name": "BaseBdev4", 00:11:32.404 "uuid": "c5d7a846-0514-5ad9-87ba-7fe900f4268c", 00:11:32.404 "is_configured": true, 00:11:32.404 "data_offset": 2048, 00:11:32.404 "data_size": 63488 00:11:32.404 } 00:11:32.404 ] 
00:11:32.404 }' 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.404 13:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.972 [2024-11-26 13:24:21.385636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.972 [2024-11-26 13:24:21.385666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.972 [2024-11-26 13:24:21.388775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.972 [2024-11-26 13:24:21.388948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.972 [2024-11-26 13:24:21.389120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.972 [2024-11-26 13:24:21.389383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:32.972 { 00:11:32.972 "results": [ 00:11:32.972 { 00:11:32.972 "job": "raid_bdev1", 00:11:32.972 "core_mask": "0x1", 00:11:32.972 "workload": "randrw", 00:11:32.972 "percentage": 50, 00:11:32.972 "status": "finished", 00:11:32.972 "queue_depth": 1, 00:11:32.972 "io_size": 131072, 00:11:32.972 "runtime": 1.402687, 00:11:32.972 "iops": 10032.886880679724, 00:11:32.972 "mibps": 1254.1108600849655, 00:11:32.972 "io_failed": 0, 00:11:32.972 "io_timeout": 0, 00:11:32.972 "avg_latency_us": 96.0921119099759, 00:11:32.972 "min_latency_us": 36.305454545454545, 00:11:32.972 "max_latency_us": 1653.2945454545454 00:11:32.972 } 00:11:32.972 ], 00:11:32.972 "core_count": 1 
00:11:32.972 } 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74724 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74724 ']' 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74724 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74724 00:11:32.972 killing process with pid 74724 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74724' 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74724 00:11:32.972 [2024-11-26 13:24:21.426805] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.972 13:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74724 00:11:33.231 [2024-11-26 13:24:21.650059] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iUHGNr0cLP 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:34.168 00:11:34.168 real 0m4.509s 00:11:34.168 user 0m5.611s 00:11:34.168 sys 0m0.589s 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.168 13:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.168 ************************************ 00:11:34.168 END TEST raid_write_error_test 00:11:34.168 ************************************ 00:11:34.168 13:24:22 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:34.168 13:24:22 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:34.168 13:24:22 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:34.168 13:24:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:34.168 13:24:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.168 13:24:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.168 ************************************ 00:11:34.168 START TEST raid_rebuild_test 00:11:34.168 ************************************ 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:34.168 
13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:34.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=74868 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 74868 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 74868 ']' 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.168 13:24:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.168 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:34.168 Zero copy mechanism will not be used. 00:11:34.168 [2024-11-26 13:24:22.712519] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:11:34.168 [2024-11-26 13:24:22.712707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74868 ] 00:11:34.427 [2024-11-26 13:24:22.892303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.686 [2024-11-26 13:24:22.993540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.686 [2024-11-26 13:24:23.162383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.686 [2024-11-26 13:24:23.162719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.253 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 BaseBdev1_malloc 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 [2024-11-26 13:24:23.615864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:35.254 
[2024-11-26 13:24:23.615946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.254 [2024-11-26 13:24:23.615974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.254 [2024-11-26 13:24:23.615989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.254 [2024-11-26 13:24:23.618459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.254 [2024-11-26 13:24:23.618520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:35.254 BaseBdev1 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 BaseBdev2_malloc 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 [2024-11-26 13:24:23.661849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:35.254 [2024-11-26 13:24:23.661930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.254 [2024-11-26 13:24:23.661954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:35.254 [2024-11-26 13:24:23.661971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.254 [2024-11-26 13:24:23.664717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.254 [2024-11-26 13:24:23.664905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:35.254 BaseBdev2 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 spare_malloc 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 spare_delay 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 [2024-11-26 13:24:23.728063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:35.254 [2024-11-26 13:24:23.728366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:35.254 [2024-11-26 13:24:23.728405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:35.254 [2024-11-26 13:24:23.728424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.254 [2024-11-26 13:24:23.731075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.254 [2024-11-26 13:24:23.731123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:35.254 spare 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 [2024-11-26 13:24:23.736118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.254 [2024-11-26 13:24:23.738620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.254 [2024-11-26 13:24:23.738729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:35.254 [2024-11-26 13:24:23.738750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.254 [2024-11-26 13:24:23.739031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:35.254 [2024-11-26 13:24:23.739201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:35.254 [2024-11-26 13:24:23.739217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:35.254 [2024-11-26 13:24:23.739418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.254 "name": "raid_bdev1", 00:11:35.254 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:35.254 "strip_size_kb": 0, 00:11:35.254 "state": "online", 00:11:35.254 
"raid_level": "raid1", 00:11:35.254 "superblock": false, 00:11:35.254 "num_base_bdevs": 2, 00:11:35.254 "num_base_bdevs_discovered": 2, 00:11:35.254 "num_base_bdevs_operational": 2, 00:11:35.254 "base_bdevs_list": [ 00:11:35.254 { 00:11:35.254 "name": "BaseBdev1", 00:11:35.254 "uuid": "0fe32145-531e-5b4f-bc23-53737732fc65", 00:11:35.254 "is_configured": true, 00:11:35.254 "data_offset": 0, 00:11:35.254 "data_size": 65536 00:11:35.254 }, 00:11:35.254 { 00:11:35.254 "name": "BaseBdev2", 00:11:35.254 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:35.254 "is_configured": true, 00:11:35.254 "data_offset": 0, 00:11:35.254 "data_size": 65536 00:11:35.254 } 00:11:35.254 ] 00:11:35.254 }' 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.254 13:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:35.823 [2024-11-26 13:24:24.256487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:35.823 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:36.082 [2024-11-26 13:24:24.632355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:36.082 /dev/nbd0 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:36.341 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.341 1+0 records in 00:11:36.341 1+0 records out 00:11:36.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574498 s, 7.1 MB/s 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:36.342 13:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:41.615 65536+0 records in 00:11:41.615 65536+0 records out 00:11:41.615 33554432 bytes (34 MB, 32 MiB) copied, 5.4877 s, 6.1 MB/s 00:11:41.615 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:41.615 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:41.615 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:41.615 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:41.615 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:41.615 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:41.615 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:42.183 [2024-11-26 13:24:30.471596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.183 [2024-11-26 13:24:30.483689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.183 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.183 "name": "raid_bdev1", 00:11:42.183 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:42.183 "strip_size_kb": 0, 00:11:42.183 "state": "online", 00:11:42.183 "raid_level": "raid1", 00:11:42.183 "superblock": false, 00:11:42.183 "num_base_bdevs": 2, 00:11:42.183 "num_base_bdevs_discovered": 1, 00:11:42.183 "num_base_bdevs_operational": 1, 00:11:42.184 "base_bdevs_list": [ 00:11:42.184 { 00:11:42.184 "name": null, 00:11:42.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.184 "is_configured": false, 00:11:42.184 "data_offset": 0, 00:11:42.184 "data_size": 65536 00:11:42.184 }, 00:11:42.184 { 00:11:42.184 "name": "BaseBdev2", 00:11:42.184 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:42.184 "is_configured": true, 00:11:42.184 "data_offset": 0, 00:11:42.184 "data_size": 65536 00:11:42.184 } 00:11:42.184 ] 00:11:42.184 }' 00:11:42.184 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.184 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.443 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.443 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.443 13:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.443 [2024-11-26 13:24:30.983839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.443 [2024-11-26 13:24:30.998458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:42.443 13:24:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.443 13:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:42.443 [2024-11-26 13:24:31.000895] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.820 "name": "raid_bdev1", 00:11:43.820 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:43.820 "strip_size_kb": 0, 00:11:43.820 "state": "online", 00:11:43.820 "raid_level": "raid1", 00:11:43.820 "superblock": false, 00:11:43.820 "num_base_bdevs": 2, 00:11:43.820 "num_base_bdevs_discovered": 2, 00:11:43.820 "num_base_bdevs_operational": 2, 00:11:43.820 "process": { 00:11:43.820 "type": "rebuild", 00:11:43.820 "target": "spare", 00:11:43.820 "progress": { 00:11:43.820 "blocks": 20480, 
00:11:43.820 "percent": 31 00:11:43.820 } 00:11:43.820 }, 00:11:43.820 "base_bdevs_list": [ 00:11:43.820 { 00:11:43.820 "name": "spare", 00:11:43.820 "uuid": "a9fc4ecf-741a-53b3-b5c2-6f1c8c037421", 00:11:43.820 "is_configured": true, 00:11:43.820 "data_offset": 0, 00:11:43.820 "data_size": 65536 00:11:43.820 }, 00:11:43.820 { 00:11:43.820 "name": "BaseBdev2", 00:11:43.820 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:43.820 "is_configured": true, 00:11:43.820 "data_offset": 0, 00:11:43.820 "data_size": 65536 00:11:43.820 } 00:11:43.820 ] 00:11:43.820 }' 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.820 [2024-11-26 13:24:32.169770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:43.820 [2024-11-26 13:24:32.208537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:43.820 [2024-11-26 13:24:32.208615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.820 [2024-11-26 13:24:32.208636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:43.820 [2024-11-26 13:24:32.208649] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:43.820 13:24:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.820 "name": "raid_bdev1", 00:11:43.820 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:43.820 "strip_size_kb": 0, 00:11:43.820 "state": "online", 00:11:43.820 "raid_level": "raid1", 00:11:43.820 
"superblock": false, 00:11:43.820 "num_base_bdevs": 2, 00:11:43.820 "num_base_bdevs_discovered": 1, 00:11:43.820 "num_base_bdevs_operational": 1, 00:11:43.820 "base_bdevs_list": [ 00:11:43.820 { 00:11:43.820 "name": null, 00:11:43.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.820 "is_configured": false, 00:11:43.820 "data_offset": 0, 00:11:43.820 "data_size": 65536 00:11:43.820 }, 00:11:43.820 { 00:11:43.820 "name": "BaseBdev2", 00:11:43.820 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:43.820 "is_configured": true, 00:11:43.820 "data_offset": 0, 00:11:43.820 "data_size": 65536 00:11:43.820 } 00:11:43.820 ] 00:11:43.820 }' 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.820 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.388 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:44.389 "name": "raid_bdev1", 00:11:44.389 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:44.389 "strip_size_kb": 0, 00:11:44.389 "state": "online", 00:11:44.389 "raid_level": "raid1", 00:11:44.389 "superblock": false, 00:11:44.389 "num_base_bdevs": 2, 00:11:44.389 "num_base_bdevs_discovered": 1, 00:11:44.389 "num_base_bdevs_operational": 1, 00:11:44.389 "base_bdevs_list": [ 00:11:44.389 { 00:11:44.389 "name": null, 00:11:44.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.389 "is_configured": false, 00:11:44.389 "data_offset": 0, 00:11:44.389 "data_size": 65536 00:11:44.389 }, 00:11:44.389 { 00:11:44.389 "name": "BaseBdev2", 00:11:44.389 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:44.389 "is_configured": true, 00:11:44.389 "data_offset": 0, 00:11:44.389 "data_size": 65536 00:11:44.389 } 00:11:44.389 ] 00:11:44.389 }' 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.389 [2024-11-26 13:24:32.917493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:44.389 [2024-11-26 13:24:32.930009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:44.389 13:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.389 
13:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:44.389 [2024-11-26 13:24:32.932404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.767 "name": "raid_bdev1", 00:11:45.767 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:45.767 "strip_size_kb": 0, 00:11:45.767 "state": "online", 00:11:45.767 "raid_level": "raid1", 00:11:45.767 "superblock": false, 00:11:45.767 "num_base_bdevs": 2, 00:11:45.767 "num_base_bdevs_discovered": 2, 00:11:45.767 "num_base_bdevs_operational": 2, 00:11:45.767 "process": { 00:11:45.767 "type": "rebuild", 00:11:45.767 "target": "spare", 00:11:45.767 "progress": { 00:11:45.767 "blocks": 20480, 00:11:45.767 "percent": 31 00:11:45.767 } 00:11:45.767 }, 00:11:45.767 "base_bdevs_list": [ 
00:11:45.767 { 00:11:45.767 "name": "spare", 00:11:45.767 "uuid": "a9fc4ecf-741a-53b3-b5c2-6f1c8c037421", 00:11:45.767 "is_configured": true, 00:11:45.767 "data_offset": 0, 00:11:45.767 "data_size": 65536 00:11:45.767 }, 00:11:45.767 { 00:11:45.767 "name": "BaseBdev2", 00:11:45.767 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:45.767 "is_configured": true, 00:11:45.767 "data_offset": 0, 00:11:45.767 "data_size": 65536 00:11:45.767 } 00:11:45.767 ] 00:11:45.767 }' 00:11:45.767 13:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.767 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:45.767 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.767 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.768 
13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.768 "name": "raid_bdev1", 00:11:45.768 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:45.768 "strip_size_kb": 0, 00:11:45.768 "state": "online", 00:11:45.768 "raid_level": "raid1", 00:11:45.768 "superblock": false, 00:11:45.768 "num_base_bdevs": 2, 00:11:45.768 "num_base_bdevs_discovered": 2, 00:11:45.768 "num_base_bdevs_operational": 2, 00:11:45.768 "process": { 00:11:45.768 "type": "rebuild", 00:11:45.768 "target": "spare", 00:11:45.768 "progress": { 00:11:45.768 "blocks": 22528, 00:11:45.768 "percent": 34 00:11:45.768 } 00:11:45.768 }, 00:11:45.768 "base_bdevs_list": [ 00:11:45.768 { 00:11:45.768 "name": "spare", 00:11:45.768 "uuid": "a9fc4ecf-741a-53b3-b5c2-6f1c8c037421", 00:11:45.768 "is_configured": true, 00:11:45.768 "data_offset": 0, 00:11:45.768 "data_size": 65536 00:11:45.768 }, 00:11:45.768 { 00:11:45.768 "name": "BaseBdev2", 00:11:45.768 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:45.768 "is_configured": true, 00:11:45.768 "data_offset": 0, 00:11:45.768 "data_size": 65536 00:11:45.768 } 00:11:45.768 ] 00:11:45.768 }' 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.768 13:24:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.705 13:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.964 13:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.964 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.964 "name": "raid_bdev1", 00:11:46.964 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:46.964 "strip_size_kb": 0, 00:11:46.964 "state": "online", 00:11:46.964 "raid_level": "raid1", 00:11:46.964 "superblock": false, 00:11:46.964 "num_base_bdevs": 2, 00:11:46.964 "num_base_bdevs_discovered": 2, 00:11:46.964 "num_base_bdevs_operational": 2, 00:11:46.964 "process": { 
00:11:46.964 "type": "rebuild", 00:11:46.964 "target": "spare", 00:11:46.964 "progress": { 00:11:46.964 "blocks": 47104, 00:11:46.964 "percent": 71 00:11:46.964 } 00:11:46.964 }, 00:11:46.964 "base_bdevs_list": [ 00:11:46.964 { 00:11:46.964 "name": "spare", 00:11:46.964 "uuid": "a9fc4ecf-741a-53b3-b5c2-6f1c8c037421", 00:11:46.964 "is_configured": true, 00:11:46.964 "data_offset": 0, 00:11:46.964 "data_size": 65536 00:11:46.964 }, 00:11:46.964 { 00:11:46.964 "name": "BaseBdev2", 00:11:46.964 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:46.964 "is_configured": true, 00:11:46.964 "data_offset": 0, 00:11:46.964 "data_size": 65536 00:11:46.964 } 00:11:46.964 ] 00:11:46.964 }' 00:11:46.964 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.964 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.964 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.964 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.964 13:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:47.902 [2024-11-26 13:24:36.151618] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:47.902 [2024-11-26 13:24:36.151718] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:47.902 [2024-11-26 13:24:36.151806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.902 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.161 "name": "raid_bdev1", 00:11:48.161 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:48.161 "strip_size_kb": 0, 00:11:48.161 "state": "online", 00:11:48.161 "raid_level": "raid1", 00:11:48.161 "superblock": false, 00:11:48.161 "num_base_bdevs": 2, 00:11:48.161 "num_base_bdevs_discovered": 2, 00:11:48.161 "num_base_bdevs_operational": 2, 00:11:48.161 "base_bdevs_list": [ 00:11:48.161 { 00:11:48.161 "name": "spare", 00:11:48.161 "uuid": "a9fc4ecf-741a-53b3-b5c2-6f1c8c037421", 00:11:48.161 "is_configured": true, 00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 65536 00:11:48.161 }, 00:11:48.161 { 00:11:48.161 "name": "BaseBdev2", 00:11:48.161 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:48.161 "is_configured": true, 00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 65536 00:11:48.161 } 00:11:48.161 ] 00:11:48.161 }' 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:48.161 13:24:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.161 "name": "raid_bdev1", 00:11:48.161 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:48.161 "strip_size_kb": 0, 00:11:48.161 "state": "online", 00:11:48.161 "raid_level": "raid1", 00:11:48.161 "superblock": false, 00:11:48.161 "num_base_bdevs": 2, 00:11:48.161 "num_base_bdevs_discovered": 2, 00:11:48.161 "num_base_bdevs_operational": 2, 00:11:48.161 "base_bdevs_list": [ 00:11:48.161 { 00:11:48.161 "name": "spare", 00:11:48.161 "uuid": "a9fc4ecf-741a-53b3-b5c2-6f1c8c037421", 00:11:48.161 "is_configured": true, 
00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 65536 00:11:48.161 }, 00:11:48.161 { 00:11:48.161 "name": "BaseBdev2", 00:11:48.161 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:48.161 "is_configured": true, 00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 65536 00:11:48.161 } 00:11:48.161 ] 00:11:48.161 }' 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.161 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.421 13:24:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.421 "name": "raid_bdev1", 00:11:48.421 "uuid": "fafc1cb1-7260-4f4a-a941-838cad5b2200", 00:11:48.421 "strip_size_kb": 0, 00:11:48.421 "state": "online", 00:11:48.421 "raid_level": "raid1", 00:11:48.421 "superblock": false, 00:11:48.421 "num_base_bdevs": 2, 00:11:48.421 "num_base_bdevs_discovered": 2, 00:11:48.421 "num_base_bdevs_operational": 2, 00:11:48.421 "base_bdevs_list": [ 00:11:48.421 { 00:11:48.421 "name": "spare", 00:11:48.421 "uuid": "a9fc4ecf-741a-53b3-b5c2-6f1c8c037421", 00:11:48.421 "is_configured": true, 00:11:48.421 "data_offset": 0, 00:11:48.421 "data_size": 65536 00:11:48.421 }, 00:11:48.421 { 00:11:48.421 "name": "BaseBdev2", 00:11:48.421 "uuid": "f6a653f3-3e87-5850-a4fe-e1d6a8d2df8c", 00:11:48.421 "is_configured": true, 00:11:48.421 "data_offset": 0, 00:11:48.421 "data_size": 65536 00:11:48.421 } 00:11:48.421 ] 00:11:48.421 }' 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.421 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.990 [2024-11-26 13:24:37.264416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.990 [2024-11-26 
13:24:37.264448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.990 [2024-11-26 13:24:37.264554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.990 [2024-11-26 13:24:37.264675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.990 [2024-11-26 13:24:37.264691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:48.990 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:49.250 /dev/nbd0 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.250 1+0 records in 00:11:49.250 1+0 records out 00:11:49.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459924 s, 8.9 MB/s 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.250 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:49.509 /dev/nbd1 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.509 1+0 records in 00:11:49.509 1+0 records out 00:11:49.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230472 s, 17.8 MB/s 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.509 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:49.769 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:49.769 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.769 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:49.769 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.769 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:49.769 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.769 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:50.047 13:24:38 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
74868 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 74868 ']' 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 74868 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.047 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74868 00:11:50.339 killing process with pid 74868 00:11:50.339 Received shutdown signal, test time was about 60.000000 seconds 00:11:50.339 00:11:50.339 Latency(us) 00:11:50.339 [2024-11-26T13:24:38.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.339 [2024-11-26T13:24:38.909Z] =================================================================================================================== 00:11:50.339 [2024-11-26T13:24:38.909Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.339 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.339 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.339 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74868' 00:11:50.339 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 74868 00:11:50.339 [2024-11-26 13:24:38.609839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.339 13:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 74868 00:11:50.339 [2024-11-26 13:24:38.814649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.284 13:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:51.284 00:11:51.284 real 0m17.053s 00:11:51.284 user 0m19.383s 00:11:51.284 sys 
0m3.120s 00:11:51.284 13:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.284 13:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.285 ************************************ 00:11:51.285 END TEST raid_rebuild_test 00:11:51.285 ************************************ 00:11:51.285 13:24:39 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:51.285 13:24:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:51.285 13:24:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.285 13:24:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.285 ************************************ 00:11:51.285 START TEST raid_rebuild_test_sb 00:11:51.285 ************************************ 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75301 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75301 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75301 ']' 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.285 13:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.285 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:51.285 Zero copy mechanism will not be used. 00:11:51.285 [2024-11-26 13:24:39.831695] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:11:51.285 [2024-11-26 13:24:39.831886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75301 ] 00:11:51.544 [2024-11-26 13:24:40.014252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.803 [2024-11-26 13:24:40.112984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.803 [2024-11-26 13:24:40.284061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.803 [2024-11-26 13:24:40.284099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:52.370 13:24:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 BaseBdev1_malloc 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 [2024-11-26 13:24:40.820815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:52.370 [2024-11-26 13:24:40.821184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.370 [2024-11-26 13:24:40.821369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:52.370 [2024-11-26 13:24:40.821477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.370 [2024-11-26 13:24:40.823979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.370 [2024-11-26 13:24:40.824100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.370 BaseBdev1 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 BaseBdev2_malloc 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 [2024-11-26 13:24:40.862483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:52.370 [2024-11-26 13:24:40.862998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.370 [2024-11-26 13:24:40.863112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:52.370 [2024-11-26 13:24:40.863221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.370 [2024-11-26 13:24:40.865839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.370 [2024-11-26 13:24:40.866100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.370 BaseBdev2 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 spare_malloc 00:11:52.370 13:24:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 spare_delay 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 [2024-11-26 13:24:40.923212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:52.370 [2024-11-26 13:24:40.923389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.370 [2024-11-26 13:24:40.923479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:52.370 [2024-11-26 13:24:40.923559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.370 [2024-11-26 13:24:40.926017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.370 [2024-11-26 13:24:40.926225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:52.370 spare 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.370 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 [2024-11-26 13:24:40.931306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.370 [2024-11-26 13:24:40.933388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.370 [2024-11-26 13:24:40.933583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:52.370 [2024-11-26 13:24:40.933607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.370 [2024-11-26 13:24:40.933854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:52.370 [2024-11-26 13:24:40.934040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:52.370 [2024-11-26 13:24:40.934054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:52.370 [2024-11-26 13:24:40.934211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.629 13:24:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.629 "name": "raid_bdev1", 00:11:52.629 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:11:52.629 "strip_size_kb": 0, 00:11:52.629 "state": "online", 00:11:52.629 "raid_level": "raid1", 00:11:52.629 "superblock": true, 00:11:52.629 "num_base_bdevs": 2, 00:11:52.629 "num_base_bdevs_discovered": 2, 00:11:52.629 "num_base_bdevs_operational": 2, 00:11:52.629 "base_bdevs_list": [ 00:11:52.629 { 00:11:52.629 "name": "BaseBdev1", 00:11:52.629 "uuid": "2c5ac3f8-9879-5d37-97b2-575ab619fcd4", 00:11:52.629 "is_configured": true, 00:11:52.629 "data_offset": 2048, 00:11:52.629 "data_size": 63488 00:11:52.629 }, 00:11:52.629 { 00:11:52.629 "name": "BaseBdev2", 00:11:52.629 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:11:52.629 "is_configured": true, 00:11:52.629 "data_offset": 2048, 00:11:52.629 "data_size": 63488 00:11:52.629 } 00:11:52.629 ] 00:11:52.629 }' 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:11:52.629 13:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.888 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:52.889 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.889 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.889 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.889 [2024-11-26 13:24:41.431666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.889 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:53.147 13:24:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:53.147 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.148 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.148 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:53.148 [2024-11-26 13:24:41.711440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:53.407 /dev/nbd0 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:53.407 13:24:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.407 1+0 records in 00:11:53.407 1+0 records out 00:11:53.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244162 s, 16.8 MB/s 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:53.407 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:58.677 63488+0 records in 00:11:58.677 63488+0 records out 00:11:58.677 32505856 bytes (33 MB, 31 MiB) copied, 5.30914 s, 6.1 MB/s 00:11:58.678 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:58.678 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:11:58.678 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:58.678 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.678 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:58.678 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.678 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.936 [2024-11-26 13:24:47.388520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.936 [2024-11-26 13:24:47.396642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.936 13:24:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.936 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.937 "name": "raid_bdev1", 00:11:58.937 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:11:58.937 "strip_size_kb": 0, 00:11:58.937 "state": "online", 
00:11:58.937 "raid_level": "raid1", 00:11:58.937 "superblock": true, 00:11:58.937 "num_base_bdevs": 2, 00:11:58.937 "num_base_bdevs_discovered": 1, 00:11:58.937 "num_base_bdevs_operational": 1, 00:11:58.937 "base_bdevs_list": [ 00:11:58.937 { 00:11:58.937 "name": null, 00:11:58.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.937 "is_configured": false, 00:11:58.937 "data_offset": 0, 00:11:58.937 "data_size": 63488 00:11:58.937 }, 00:11:58.937 { 00:11:58.937 "name": "BaseBdev2", 00:11:58.937 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:11:58.937 "is_configured": true, 00:11:58.937 "data_offset": 2048, 00:11:58.937 "data_size": 63488 00:11:58.937 } 00:11:58.937 ] 00:11:58.937 }' 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.937 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.504 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:59.504 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.504 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.504 [2024-11-26 13:24:47.880719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.504 [2024-11-26 13:24:47.895139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:11:59.504 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.504 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:59.504 [2024-11-26 13:24:47.897615] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.441 13:24:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.441 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.441 "name": "raid_bdev1", 00:12:00.441 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:00.441 "strip_size_kb": 0, 00:12:00.441 "state": "online", 00:12:00.441 "raid_level": "raid1", 00:12:00.441 "superblock": true, 00:12:00.441 "num_base_bdevs": 2, 00:12:00.441 "num_base_bdevs_discovered": 2, 00:12:00.441 "num_base_bdevs_operational": 2, 00:12:00.441 "process": { 00:12:00.441 "type": "rebuild", 00:12:00.441 "target": "spare", 00:12:00.441 "progress": { 00:12:00.441 "blocks": 20480, 00:12:00.441 "percent": 32 00:12:00.441 } 00:12:00.441 }, 00:12:00.441 "base_bdevs_list": [ 00:12:00.441 { 00:12:00.441 "name": "spare", 00:12:00.441 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:00.441 "is_configured": true, 00:12:00.441 "data_offset": 2048, 00:12:00.441 "data_size": 63488 00:12:00.441 }, 00:12:00.441 { 00:12:00.441 "name": "BaseBdev2", 00:12:00.441 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:00.442 
"is_configured": true, 00:12:00.442 "data_offset": 2048, 00:12:00.442 "data_size": 63488 00:12:00.442 } 00:12:00.442 ] 00:12:00.442 }' 00:12:00.442 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.700 [2024-11-26 13:24:49.067218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.700 [2024-11-26 13:24:49.105423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:00.700 [2024-11-26 13:24:49.105976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.700 [2024-11-26 13:24:49.106144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.700 [2024-11-26 13:24:49.106202] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:00.700 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.701 "name": "raid_bdev1", 00:12:00.701 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:00.701 "strip_size_kb": 0, 00:12:00.701 "state": "online", 00:12:00.701 "raid_level": "raid1", 00:12:00.701 "superblock": true, 00:12:00.701 "num_base_bdevs": 2, 00:12:00.701 "num_base_bdevs_discovered": 1, 00:12:00.701 "num_base_bdevs_operational": 1, 00:12:00.701 "base_bdevs_list": [ 00:12:00.701 { 00:12:00.701 "name": null, 00:12:00.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.701 "is_configured": false, 00:12:00.701 "data_offset": 0, 00:12:00.701 "data_size": 
63488 00:12:00.701 }, 00:12:00.701 { 00:12:00.701 "name": "BaseBdev2", 00:12:00.701 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:00.701 "is_configured": true, 00:12:00.701 "data_offset": 2048, 00:12:00.701 "data_size": 63488 00:12:00.701 } 00:12:00.701 ] 00:12:00.701 }' 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.701 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.267 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.267 "name": "raid_bdev1", 00:12:01.267 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:01.267 "strip_size_kb": 0, 00:12:01.267 "state": "online", 00:12:01.267 "raid_level": "raid1", 00:12:01.267 "superblock": true, 00:12:01.268 "num_base_bdevs": 2, 00:12:01.268 "num_base_bdevs_discovered": 1, 
00:12:01.268 "num_base_bdevs_operational": 1, 00:12:01.268 "base_bdevs_list": [ 00:12:01.268 { 00:12:01.268 "name": null, 00:12:01.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.268 "is_configured": false, 00:12:01.268 "data_offset": 0, 00:12:01.268 "data_size": 63488 00:12:01.268 }, 00:12:01.268 { 00:12:01.268 "name": "BaseBdev2", 00:12:01.268 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:01.268 "is_configured": true, 00:12:01.268 "data_offset": 2048, 00:12:01.268 "data_size": 63488 00:12:01.268 } 00:12:01.268 ] 00:12:01.268 }' 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.268 [2024-11-26 13:24:49.799243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.268 [2024-11-26 13:24:49.810489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.268 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:01.268 [2024-11-26 13:24:49.812712] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.643 "name": "raid_bdev1", 00:12:02.643 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:02.643 "strip_size_kb": 0, 00:12:02.643 "state": "online", 00:12:02.643 "raid_level": "raid1", 00:12:02.643 "superblock": true, 00:12:02.643 "num_base_bdevs": 2, 00:12:02.643 "num_base_bdevs_discovered": 2, 00:12:02.643 "num_base_bdevs_operational": 2, 00:12:02.643 "process": { 00:12:02.643 "type": "rebuild", 00:12:02.643 "target": "spare", 00:12:02.643 "progress": { 00:12:02.643 "blocks": 20480, 00:12:02.643 "percent": 32 00:12:02.643 } 00:12:02.643 }, 00:12:02.643 "base_bdevs_list": [ 00:12:02.643 { 00:12:02.643 "name": "spare", 00:12:02.643 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:02.643 "is_configured": true, 00:12:02.643 "data_offset": 2048, 00:12:02.643 "data_size": 63488 00:12:02.643 }, 00:12:02.643 { 00:12:02.643 "name": "BaseBdev2", 
00:12:02.643 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:02.643 "is_configured": true, 00:12:02.643 "data_offset": 2048, 00:12:02.643 "data_size": 63488 00:12:02.643 } 00:12:02.643 ] 00:12:02.643 }' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:02.643 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.643 13:24:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.643 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.643 13:24:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.643 13:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.643 "name": "raid_bdev1", 00:12:02.643 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:02.643 "strip_size_kb": 0, 00:12:02.643 "state": "online", 00:12:02.643 "raid_level": "raid1", 00:12:02.643 "superblock": true, 00:12:02.643 "num_base_bdevs": 2, 00:12:02.643 "num_base_bdevs_discovered": 2, 00:12:02.643 "num_base_bdevs_operational": 2, 00:12:02.643 "process": { 00:12:02.643 "type": "rebuild", 00:12:02.643 "target": "spare", 00:12:02.643 "progress": { 00:12:02.643 "blocks": 22528, 00:12:02.643 "percent": 35 00:12:02.643 } 00:12:02.643 }, 00:12:02.643 "base_bdevs_list": [ 00:12:02.643 { 00:12:02.643 "name": "spare", 00:12:02.643 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:02.643 "is_configured": true, 00:12:02.643 "data_offset": 2048, 00:12:02.643 "data_size": 63488 00:12:02.643 }, 00:12:02.643 { 00:12:02.643 "name": "BaseBdev2", 00:12:02.643 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:02.643 "is_configured": true, 00:12:02.643 "data_offset": 2048, 00:12:02.643 "data_size": 63488 00:12:02.643 } 00:12:02.643 ] 00:12:02.643 }' 00:12:02.643 13:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.643 13:24:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.643 13:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.643 13:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.643 13:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.021 "name": "raid_bdev1", 00:12:04.021 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:04.021 "strip_size_kb": 0, 00:12:04.021 "state": "online", 00:12:04.021 "raid_level": "raid1", 00:12:04.021 "superblock": true, 00:12:04.021 "num_base_bdevs": 2, 00:12:04.021 
"num_base_bdevs_discovered": 2, 00:12:04.021 "num_base_bdevs_operational": 2, 00:12:04.021 "process": { 00:12:04.021 "type": "rebuild", 00:12:04.021 "target": "spare", 00:12:04.021 "progress": { 00:12:04.021 "blocks": 47104, 00:12:04.021 "percent": 74 00:12:04.021 } 00:12:04.021 }, 00:12:04.021 "base_bdevs_list": [ 00:12:04.021 { 00:12:04.021 "name": "spare", 00:12:04.021 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:04.021 "is_configured": true, 00:12:04.021 "data_offset": 2048, 00:12:04.021 "data_size": 63488 00:12:04.021 }, 00:12:04.021 { 00:12:04.021 "name": "BaseBdev2", 00:12:04.021 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:04.021 "is_configured": true, 00:12:04.021 "data_offset": 2048, 00:12:04.021 "data_size": 63488 00:12:04.021 } 00:12:04.021 ] 00:12:04.021 }' 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.021 13:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:04.589 [2024-11-26 13:24:52.930537] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:04.589 [2024-11-26 13:24:52.930608] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:04.589 [2024-11-26 13:24:52.931316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.847 13:24:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.847 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.847 "name": "raid_bdev1", 00:12:04.847 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:04.847 "strip_size_kb": 0, 00:12:04.847 "state": "online", 00:12:04.847 "raid_level": "raid1", 00:12:04.847 "superblock": true, 00:12:04.847 "num_base_bdevs": 2, 00:12:04.847 "num_base_bdevs_discovered": 2, 00:12:04.847 "num_base_bdevs_operational": 2, 00:12:04.847 "base_bdevs_list": [ 00:12:04.848 { 00:12:04.848 "name": "spare", 00:12:04.848 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:04.848 "is_configured": true, 00:12:04.848 "data_offset": 2048, 00:12:04.848 "data_size": 63488 00:12:04.848 }, 00:12:04.848 { 00:12:04.848 "name": "BaseBdev2", 00:12:04.848 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:04.848 "is_configured": true, 00:12:04.848 "data_offset": 2048, 00:12:04.848 "data_size": 63488 00:12:04.848 } 00:12:04.848 ] 00:12:04.848 }' 00:12:04.848 13:24:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.106 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:05.106 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.106 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:05.106 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:05.106 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.106 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.106 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.107 "name": "raid_bdev1", 00:12:05.107 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:05.107 "strip_size_kb": 0, 00:12:05.107 "state": "online", 00:12:05.107 "raid_level": "raid1", 00:12:05.107 "superblock": true, 00:12:05.107 "num_base_bdevs": 2, 00:12:05.107 "num_base_bdevs_discovered": 2, 
00:12:05.107 "num_base_bdevs_operational": 2, 00:12:05.107 "base_bdevs_list": [ 00:12:05.107 { 00:12:05.107 "name": "spare", 00:12:05.107 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:05.107 "is_configured": true, 00:12:05.107 "data_offset": 2048, 00:12:05.107 "data_size": 63488 00:12:05.107 }, 00:12:05.107 { 00:12:05.107 "name": "BaseBdev2", 00:12:05.107 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:05.107 "is_configured": true, 00:12:05.107 "data_offset": 2048, 00:12:05.107 "data_size": 63488 00:12:05.107 } 00:12:05.107 ] 00:12:05.107 }' 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.107 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.366 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.366 "name": "raid_bdev1", 00:12:05.366 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:05.366 "strip_size_kb": 0, 00:12:05.366 "state": "online", 00:12:05.366 "raid_level": "raid1", 00:12:05.366 "superblock": true, 00:12:05.366 "num_base_bdevs": 2, 00:12:05.366 "num_base_bdevs_discovered": 2, 00:12:05.366 "num_base_bdevs_operational": 2, 00:12:05.366 "base_bdevs_list": [ 00:12:05.366 { 00:12:05.366 "name": "spare", 00:12:05.366 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:05.366 "is_configured": true, 00:12:05.366 "data_offset": 2048, 00:12:05.366 "data_size": 63488 00:12:05.366 }, 00:12:05.366 { 00:12:05.366 "name": "BaseBdev2", 00:12:05.366 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:05.366 "is_configured": true, 00:12:05.366 "data_offset": 2048, 00:12:05.366 "data_size": 63488 00:12:05.366 } 00:12:05.366 ] 00:12:05.366 }' 00:12:05.366 13:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.366 13:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.625 13:24:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.625 [2024-11-26 13:24:54.163102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.625 [2024-11-26 13:24:54.163312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.625 [2024-11-26 13:24:54.163404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.625 [2024-11-26 13:24:54.163486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.625 [2024-11-26 13:24:54.163502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.625 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.885 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:06.144 /dev/nbd0 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.144 1+0 records in 00:12:06.144 1+0 records out 00:12:06.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292774 s, 14.0 MB/s 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.144 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:06.404 /dev/nbd1 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:06.404 
13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.404 1+0 records in 00:12:06.404 1+0 records out 00:12:06.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298052 s, 13.7 MB/s 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.404 13:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.664 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.923 [2024-11-26 13:24:55.387169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:06.923 [2024-11-26 13:24:55.387711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.923 [2024-11-26 13:24:55.387837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:06.923 [2024-11-26 13:24:55.387920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.923 [2024-11-26 13:24:55.390892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.923 [2024-11-26 13:24:55.391176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
00:12:06.923 [2024-11-26 13:24:55.391498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:06.923 [2024-11-26 13:24:55.391757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:06.923 [2024-11-26 13:24:55.392088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.923 spare 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.923 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.182 [2024-11-26 13:24:55.492438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:07.182 [2024-11-26 13:24:55.492468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.182 [2024-11-26 13:24:55.492732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:07.182 [2024-11-26 13:24:55.492915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:07.182 [2024-11-26 13:24:55.492930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:07.182 [2024-11-26 13:24:55.493092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.182 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.182 "name": "raid_bdev1", 00:12:07.182 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:07.182 "strip_size_kb": 0, 00:12:07.183 "state": "online", 00:12:07.183 "raid_level": "raid1", 00:12:07.183 "superblock": true, 00:12:07.183 "num_base_bdevs": 2, 00:12:07.183 "num_base_bdevs_discovered": 2, 00:12:07.183 "num_base_bdevs_operational": 2, 00:12:07.183 "base_bdevs_list": [ 00:12:07.183 { 00:12:07.183 "name": "spare", 00:12:07.183 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:07.183 "is_configured": true, 00:12:07.183 
"data_offset": 2048, 00:12:07.183 "data_size": 63488 00:12:07.183 }, 00:12:07.183 { 00:12:07.183 "name": "BaseBdev2", 00:12:07.183 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:07.183 "is_configured": true, 00:12:07.183 "data_offset": 2048, 00:12:07.183 "data_size": 63488 00:12:07.183 } 00:12:07.183 ] 00:12:07.183 }' 00:12:07.183 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.183 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.442 13:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.442 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.701 "name": "raid_bdev1", 00:12:07.701 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:07.701 "strip_size_kb": 0, 00:12:07.701 "state": "online", 00:12:07.701 "raid_level": "raid1", 00:12:07.701 "superblock": true, 00:12:07.701 "num_base_bdevs": 2, 
00:12:07.701 "num_base_bdevs_discovered": 2, 00:12:07.701 "num_base_bdevs_operational": 2, 00:12:07.701 "base_bdevs_list": [ 00:12:07.701 { 00:12:07.701 "name": "spare", 00:12:07.701 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:07.701 "is_configured": true, 00:12:07.701 "data_offset": 2048, 00:12:07.701 "data_size": 63488 00:12:07.701 }, 00:12:07.701 { 00:12:07.701 "name": "BaseBdev2", 00:12:07.701 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:07.701 "is_configured": true, 00:12:07.701 "data_offset": 2048, 00:12:07.701 "data_size": 63488 00:12:07.701 } 00:12:07.701 ] 00:12:07.701 }' 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.701 [2024-11-26 13:24:56.199771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.701 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:07.701 "name": "raid_bdev1", 00:12:07.701 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:07.702 "strip_size_kb": 0, 00:12:07.702 "state": "online", 00:12:07.702 "raid_level": "raid1", 00:12:07.702 "superblock": true, 00:12:07.702 "num_base_bdevs": 2, 00:12:07.702 "num_base_bdevs_discovered": 1, 00:12:07.702 "num_base_bdevs_operational": 1, 00:12:07.702 "base_bdevs_list": [ 00:12:07.702 { 00:12:07.702 "name": null, 00:12:07.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.702 "is_configured": false, 00:12:07.702 "data_offset": 0, 00:12:07.702 "data_size": 63488 00:12:07.702 }, 00:12:07.702 { 00:12:07.702 "name": "BaseBdev2", 00:12:07.702 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:07.702 "is_configured": true, 00:12:07.702 "data_offset": 2048, 00:12:07.702 "data_size": 63488 00:12:07.702 } 00:12:07.702 ] 00:12:07.702 }' 00:12:07.702 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.702 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.269 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:08.269 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.269 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.269 [2024-11-26 13:24:56.735895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:08.269 [2024-11-26 13:24:56.736033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:08.269 [2024-11-26 13:24:56.736059] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:08.269 [2024-11-26 13:24:56.736421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:08.269 [2024-11-26 13:24:56.749149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:08.269 13:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.269 13:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:08.269 [2024-11-26 13:24:56.751405] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.205 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.464 "name": "raid_bdev1", 00:12:09.464 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:09.464 "strip_size_kb": 0, 00:12:09.464 "state": "online", 00:12:09.464 "raid_level": "raid1", 
00:12:09.464 "superblock": true, 00:12:09.464 "num_base_bdevs": 2, 00:12:09.464 "num_base_bdevs_discovered": 2, 00:12:09.464 "num_base_bdevs_operational": 2, 00:12:09.464 "process": { 00:12:09.464 "type": "rebuild", 00:12:09.464 "target": "spare", 00:12:09.464 "progress": { 00:12:09.464 "blocks": 20480, 00:12:09.464 "percent": 32 00:12:09.464 } 00:12:09.464 }, 00:12:09.464 "base_bdevs_list": [ 00:12:09.464 { 00:12:09.464 "name": "spare", 00:12:09.464 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:09.464 "is_configured": true, 00:12:09.464 "data_offset": 2048, 00:12:09.464 "data_size": 63488 00:12:09.464 }, 00:12:09.464 { 00:12:09.464 "name": "BaseBdev2", 00:12:09.464 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:09.464 "is_configured": true, 00:12:09.464 "data_offset": 2048, 00:12:09.464 "data_size": 63488 00:12:09.464 } 00:12:09.464 ] 00:12:09.464 }' 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.464 [2024-11-26 13:24:57.917380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.464 [2024-11-26 13:24:57.958954] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:09.464 [2024-11-26 13:24:57.959171] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:09.464 [2024-11-26 13:24:57.959655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.464 [2024-11-26 13:24:57.959716] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.464 13:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.464 13:24:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.723 13:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.723 "name": "raid_bdev1", 00:12:09.723 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:09.723 "strip_size_kb": 0, 00:12:09.723 "state": "online", 00:12:09.723 "raid_level": "raid1", 00:12:09.723 "superblock": true, 00:12:09.723 "num_base_bdevs": 2, 00:12:09.723 "num_base_bdevs_discovered": 1, 00:12:09.723 "num_base_bdevs_operational": 1, 00:12:09.723 "base_bdevs_list": [ 00:12:09.723 { 00:12:09.723 "name": null, 00:12:09.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.723 "is_configured": false, 00:12:09.723 "data_offset": 0, 00:12:09.723 "data_size": 63488 00:12:09.723 }, 00:12:09.723 { 00:12:09.723 "name": "BaseBdev2", 00:12:09.723 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:09.723 "is_configured": true, 00:12:09.723 "data_offset": 2048, 00:12:09.723 "data_size": 63488 00:12:09.723 } 00:12:09.723 ] 00:12:09.723 }' 00:12:09.723 13:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.723 13:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.981 13:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:09.982 13:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.982 13:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.982 [2024-11-26 13:24:58.516341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:09.982 [2024-11-26 13:24:58.516416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.982 [2024-11-26 13:24:58.516442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:09.982 [2024-11-26 13:24:58.516458] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.982 [2024-11-26 13:24:58.516949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.982 [2024-11-26 13:24:58.516977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:09.982 [2024-11-26 13:24:58.517062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:09.982 [2024-11-26 13:24:58.517085] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:09.982 [2024-11-26 13:24:58.517096] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:09.982 [2024-11-26 13:24:58.517124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.982 [2024-11-26 13:24:58.527708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:09.982 spare 00:12:09.982 13:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.982 13:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:09.982 [2024-11-26 13:24:58.529933] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.357 "name": "raid_bdev1", 00:12:11.357 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:11.357 "strip_size_kb": 0, 00:12:11.357 "state": "online", 00:12:11.357 "raid_level": "raid1", 00:12:11.357 "superblock": true, 00:12:11.357 "num_base_bdevs": 2, 00:12:11.357 "num_base_bdevs_discovered": 2, 00:12:11.357 "num_base_bdevs_operational": 2, 00:12:11.357 "process": { 00:12:11.357 "type": "rebuild", 00:12:11.357 "target": "spare", 00:12:11.357 "progress": { 00:12:11.357 "blocks": 20480, 00:12:11.357 "percent": 32 00:12:11.357 } 00:12:11.357 }, 00:12:11.357 "base_bdevs_list": [ 00:12:11.357 { 00:12:11.357 "name": "spare", 00:12:11.357 "uuid": "4fc8679f-27b1-56c7-9d5f-a958394a78b8", 00:12:11.357 "is_configured": true, 00:12:11.357 "data_offset": 2048, 00:12:11.357 "data_size": 63488 00:12:11.357 }, 00:12:11.357 { 00:12:11.357 "name": "BaseBdev2", 00:12:11.357 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:11.357 "is_configured": true, 00:12:11.357 "data_offset": 2048, 00:12:11.357 "data_size": 63488 00:12:11.357 } 00:12:11.357 ] 00:12:11.357 }' 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.357 
13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.357 [2024-11-26 13:24:59.695723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.357 [2024-11-26 13:24:59.736726] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:11.357 [2024-11-26 13:24:59.736939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.357 [2024-11-26 13:24:59.737070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.357 [2024-11-26 13:24:59.737119] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.357 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.357 "name": "raid_bdev1", 00:12:11.357 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:11.357 "strip_size_kb": 0, 00:12:11.357 "state": "online", 00:12:11.357 "raid_level": "raid1", 00:12:11.357 "superblock": true, 00:12:11.357 "num_base_bdevs": 2, 00:12:11.357 "num_base_bdevs_discovered": 1, 00:12:11.357 "num_base_bdevs_operational": 1, 00:12:11.357 "base_bdevs_list": [ 00:12:11.357 { 00:12:11.357 "name": null, 00:12:11.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.357 "is_configured": false, 00:12:11.357 "data_offset": 0, 00:12:11.357 "data_size": 63488 00:12:11.357 }, 00:12:11.357 { 00:12:11.357 "name": "BaseBdev2", 00:12:11.358 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:11.358 "is_configured": true, 00:12:11.358 "data_offset": 2048, 00:12:11.358 "data_size": 63488 00:12:11.358 } 00:12:11.358 ] 00:12:11.358 }' 00:12:11.358 13:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.358 13:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 13:25:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.925 "name": "raid_bdev1", 00:12:11.925 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:11.925 "strip_size_kb": 0, 00:12:11.925 "state": "online", 00:12:11.925 "raid_level": "raid1", 00:12:11.925 "superblock": true, 00:12:11.925 "num_base_bdevs": 2, 00:12:11.925 "num_base_bdevs_discovered": 1, 00:12:11.925 "num_base_bdevs_operational": 1, 00:12:11.925 "base_bdevs_list": [ 00:12:11.925 { 00:12:11.925 "name": null, 00:12:11.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.925 "is_configured": false, 00:12:11.925 "data_offset": 0, 00:12:11.925 "data_size": 63488 00:12:11.925 }, 00:12:11.925 { 00:12:11.925 "name": "BaseBdev2", 00:12:11.925 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:11.925 "is_configured": true, 00:12:11.925 "data_offset": 2048, 00:12:11.925 "data_size": 
63488 00:12:11.925 } 00:12:11.925 ] 00:12:11.925 }' 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.926 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:11.926 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.926 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.926 [2024-11-26 13:25:00.457241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:11.926 [2024-11-26 13:25:00.457305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.926 [2024-11-26 13:25:00.457343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:11.926 [2024-11-26 13:25:00.457367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.926 [2024-11-26 13:25:00.457838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.926 [2024-11-26 13:25:00.457867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:11.926 [2024-11-26 13:25:00.457984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:11.926 [2024-11-26 13:25:00.458004] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:11.926 [2024-11-26 13:25:00.458019] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:11.926 [2024-11-26 13:25:00.458030] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:11.926 BaseBdev1 00:12:11.926 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.926 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.303 "name": "raid_bdev1", 00:12:13.303 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:13.303 "strip_size_kb": 0, 00:12:13.303 "state": "online", 00:12:13.303 "raid_level": "raid1", 00:12:13.303 "superblock": true, 00:12:13.303 "num_base_bdevs": 2, 00:12:13.303 "num_base_bdevs_discovered": 1, 00:12:13.303 "num_base_bdevs_operational": 1, 00:12:13.303 "base_bdevs_list": [ 00:12:13.303 { 00:12:13.303 "name": null, 00:12:13.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.303 "is_configured": false, 00:12:13.303 "data_offset": 0, 00:12:13.303 "data_size": 63488 00:12:13.303 }, 00:12:13.303 { 00:12:13.303 "name": "BaseBdev2", 00:12:13.303 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:13.303 "is_configured": true, 00:12:13.303 "data_offset": 2048, 00:12:13.303 "data_size": 63488 00:12:13.303 } 00:12:13.303 ] 00:12:13.303 }' 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.303 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.562 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.562 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.562 "name": "raid_bdev1", 00:12:13.562 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:13.562 "strip_size_kb": 0, 00:12:13.562 "state": "online", 00:12:13.562 "raid_level": "raid1", 00:12:13.562 "superblock": true, 00:12:13.562 "num_base_bdevs": 2, 00:12:13.562 "num_base_bdevs_discovered": 1, 00:12:13.562 "num_base_bdevs_operational": 1, 00:12:13.562 "base_bdevs_list": [ 00:12:13.562 { 00:12:13.562 "name": null, 00:12:13.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.562 "is_configured": false, 00:12:13.562 "data_offset": 0, 00:12:13.562 "data_size": 63488 00:12:13.562 }, 00:12:13.562 { 00:12:13.562 "name": "BaseBdev2", 00:12:13.562 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:13.562 "is_configured": true, 00:12:13.562 "data_offset": 2048, 00:12:13.562 "data_size": 63488 00:12:13.562 } 00:12:13.562 ] 00:12:13.562 }' 00:12:13.562 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.563 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.563 13:25:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.821 [2024-11-26 13:25:02.149657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.821 [2024-11-26 13:25:02.149776] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:13.821 [2024-11-26 13:25:02.149797] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:13.821 request: 00:12:13.821 { 00:12:13.821 "base_bdev": "BaseBdev1", 00:12:13.821 "raid_bdev": "raid_bdev1", 00:12:13.821 "method": 
"bdev_raid_add_base_bdev", 00:12:13.821 "req_id": 1 00:12:13.821 } 00:12:13.821 Got JSON-RPC error response 00:12:13.821 response: 00:12:13.821 { 00:12:13.821 "code": -22, 00:12:13.821 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:13.821 } 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.821 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.757 13:25:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.757 "name": "raid_bdev1", 00:12:14.757 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:14.757 "strip_size_kb": 0, 00:12:14.757 "state": "online", 00:12:14.757 "raid_level": "raid1", 00:12:14.757 "superblock": true, 00:12:14.757 "num_base_bdevs": 2, 00:12:14.757 "num_base_bdevs_discovered": 1, 00:12:14.757 "num_base_bdevs_operational": 1, 00:12:14.757 "base_bdevs_list": [ 00:12:14.757 { 00:12:14.757 "name": null, 00:12:14.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.757 "is_configured": false, 00:12:14.757 "data_offset": 0, 00:12:14.757 "data_size": 63488 00:12:14.757 }, 00:12:14.757 { 00:12:14.757 "name": "BaseBdev2", 00:12:14.757 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:14.757 "is_configured": true, 00:12:14.757 "data_offset": 2048, 00:12:14.757 "data_size": 63488 00:12:14.757 } 00:12:14.757 ] 00:12:14.757 }' 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.757 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.325 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.325 "name": "raid_bdev1", 00:12:15.325 "uuid": "e25d5c79-ccda-4a2f-9731-49dc41298995", 00:12:15.325 "strip_size_kb": 0, 00:12:15.325 "state": "online", 00:12:15.325 "raid_level": "raid1", 00:12:15.325 "superblock": true, 00:12:15.325 "num_base_bdevs": 2, 00:12:15.325 "num_base_bdevs_discovered": 1, 00:12:15.325 "num_base_bdevs_operational": 1, 00:12:15.325 "base_bdevs_list": [ 00:12:15.325 { 00:12:15.325 "name": null, 00:12:15.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.326 "is_configured": false, 00:12:15.326 "data_offset": 0, 00:12:15.326 "data_size": 63488 00:12:15.326 }, 00:12:15.326 { 00:12:15.326 "name": "BaseBdev2", 00:12:15.326 "uuid": "f5f0f6fe-bb0e-56c2-a913-00b264bcae4c", 00:12:15.326 "is_configured": true, 00:12:15.326 "data_offset": 2048, 00:12:15.326 "data_size": 63488 00:12:15.326 } 00:12:15.326 ] 00:12:15.326 }' 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75301 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75301 ']' 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75301 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75301 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.326 killing process with pid 75301 00:12:15.326 Received shutdown signal, test time was about 60.000000 seconds 00:12:15.326 00:12:15.326 Latency(us) 00:12:15.326 [2024-11-26T13:25:03.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.326 [2024-11-26T13:25:03.896Z] =================================================================================================================== 00:12:15.326 [2024-11-26T13:25:03.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75301' 00:12:15.326 13:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75301 00:12:15.326 [2024-11-26 13:25:03.877837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.326 13:25:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75301 00:12:15.326 [2024-11-26 13:25:03.877952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.326 [2024-11-26 13:25:03.878007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.326 [2024-11-26 13:25:03.878024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:15.585 [2024-11-26 13:25:04.083119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.520 ************************************ 00:12:16.520 END TEST raid_rebuild_test_sb 00:12:16.520 ************************************ 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:16.520 00:12:16.520 real 0m25.215s 00:12:16.520 user 0m31.495s 00:12:16.520 sys 0m3.648s 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.520 13:25:04 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:16.520 13:25:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:16.520 13:25:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.520 13:25:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.520 ************************************ 00:12:16.520 START TEST raid_rebuild_test_io 00:12:16.520 ************************************ 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:16.520 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:16.521 
13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76055 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76055 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76055 ']' 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.521 13:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.779 [2024-11-26 13:25:05.106112] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:12:16.779 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:16.779 Zero copy mechanism will not be used. 
00:12:16.779 [2024-11-26 13:25:05.106576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76055 ] 00:12:16.779 [2024-11-26 13:25:05.289903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.037 [2024-11-26 13:25:05.388135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.037 [2024-11-26 13:25:05.559956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.037 [2024-11-26 13:25:05.560019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.605 BaseBdev1_malloc 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.605 [2024-11-26 13:25:06.061265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:17.605 [2024-11-26 13:25:06.061477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.605 [2024-11-26 13:25:06.061516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:17.605 [2024-11-26 13:25:06.061534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.605 [2024-11-26 13:25:06.063988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.605 [2024-11-26 13:25:06.064034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:17.605 BaseBdev1 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.605 BaseBdev2_malloc 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.605 [2024-11-26 13:25:06.102902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:17.605 [2024-11-26 13:25:06.102963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.605 [2024-11-26 13:25:06.102988] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:17.605 [2024-11-26 13:25:06.103005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.605 [2024-11-26 13:25:06.105411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.605 [2024-11-26 13:25:06.105471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:17.605 BaseBdev2 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.605 spare_malloc 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.605 spare_delay 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.605 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.605 [2024-11-26 13:25:06.163913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:17.605 [2024-11-26 13:25:06.163975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.605 [2024-11-26 13:25:06.164001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:17.605 [2024-11-26 13:25:06.164015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.605 [2024-11-26 13:25:06.166470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.605 [2024-11-26 13:25:06.166515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:17.864 spare 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.864 [2024-11-26 13:25:06.171972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.864 [2024-11-26 13:25:06.174093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.864 [2024-11-26 13:25:06.174204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:17.864 [2024-11-26 13:25:06.174224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:17.864 [2024-11-26 13:25:06.174501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:17.864 [2024-11-26 13:25:06.174741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:17.864 [2024-11-26 13:25:06.174767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:17.864 [2024-11-26 13:25:06.174938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.864 
"name": "raid_bdev1", 00:12:17.864 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:17.864 "strip_size_kb": 0, 00:12:17.864 "state": "online", 00:12:17.864 "raid_level": "raid1", 00:12:17.864 "superblock": false, 00:12:17.864 "num_base_bdevs": 2, 00:12:17.864 "num_base_bdevs_discovered": 2, 00:12:17.864 "num_base_bdevs_operational": 2, 00:12:17.864 "base_bdevs_list": [ 00:12:17.864 { 00:12:17.864 "name": "BaseBdev1", 00:12:17.864 "uuid": "0e24d866-5283-5531-8fda-fef5e9e1390a", 00:12:17.864 "is_configured": true, 00:12:17.864 "data_offset": 0, 00:12:17.864 "data_size": 65536 00:12:17.864 }, 00:12:17.864 { 00:12:17.864 "name": "BaseBdev2", 00:12:17.864 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:17.864 "is_configured": true, 00:12:17.864 "data_offset": 0, 00:12:17.864 "data_size": 65536 00:12:17.864 } 00:12:17.864 ] 00:12:17.864 }' 00:12:17.864 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.865 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.432 [2024-11-26 13:25:06.712348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.432 [2024-11-26 13:25:06.808043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:18.432 13:25:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.432 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.433 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.433 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.433 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.433 "name": "raid_bdev1", 00:12:18.433 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:18.433 "strip_size_kb": 0, 00:12:18.433 "state": "online", 00:12:18.433 "raid_level": "raid1", 00:12:18.433 "superblock": false, 00:12:18.433 "num_base_bdevs": 2, 00:12:18.433 "num_base_bdevs_discovered": 1, 00:12:18.433 "num_base_bdevs_operational": 1, 00:12:18.433 "base_bdevs_list": [ 00:12:18.433 { 00:12:18.433 "name": null, 00:12:18.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.433 "is_configured": false, 00:12:18.433 "data_offset": 0, 00:12:18.433 "data_size": 65536 00:12:18.433 }, 00:12:18.433 { 00:12:18.433 "name": "BaseBdev2", 00:12:18.433 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:18.433 "is_configured": true, 00:12:18.433 "data_offset": 0, 00:12:18.433 "data_size": 65536 00:12:18.433 } 00:12:18.433 ] 00:12:18.433 }' 00:12:18.433 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:18.433 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.433 [2024-11-26 13:25:06.931734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:18.433 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:18.433 Zero copy mechanism will not be used. 00:12:18.433 Running I/O for 60 seconds... 00:12:19.001 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:19.001 13:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.001 13:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.001 [2024-11-26 13:25:07.327128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.001 13:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.001 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:19.001 [2024-11-26 13:25:07.387708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:19.001 [2024-11-26 13:25:07.389906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:19.001 [2024-11-26 13:25:07.503960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:19.001 [2024-11-26 13:25:07.504326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:19.259 [2024-11-26 13:25:07.724604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:19.259 [2024-11-26 13:25:07.724739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:19.518 168.00 IOPS, 504.00 MiB/s 
[2024-11-26T13:25:08.088Z] [2024-11-26 13:25:07.953558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:19.518 [2024-11-26 13:25:07.953877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:19.776 [2024-11-26 13:25:08.193998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:19.776 [2024-11-26 13:25:08.194196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.034 "name": "raid_bdev1", 00:12:20.034 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:20.034 
"strip_size_kb": 0, 00:12:20.034 "state": "online", 00:12:20.034 "raid_level": "raid1", 00:12:20.034 "superblock": false, 00:12:20.034 "num_base_bdevs": 2, 00:12:20.034 "num_base_bdevs_discovered": 2, 00:12:20.034 "num_base_bdevs_operational": 2, 00:12:20.034 "process": { 00:12:20.034 "type": "rebuild", 00:12:20.034 "target": "spare", 00:12:20.034 "progress": { 00:12:20.034 "blocks": 10240, 00:12:20.034 "percent": 15 00:12:20.034 } 00:12:20.034 }, 00:12:20.034 "base_bdevs_list": [ 00:12:20.034 { 00:12:20.034 "name": "spare", 00:12:20.034 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:20.034 "is_configured": true, 00:12:20.034 "data_offset": 0, 00:12:20.034 "data_size": 65536 00:12:20.034 }, 00:12:20.034 { 00:12:20.034 "name": "BaseBdev2", 00:12:20.034 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:20.034 "is_configured": true, 00:12:20.034 "data_offset": 0, 00:12:20.034 "data_size": 65536 00:12:20.034 } 00:12:20.034 ] 00:12:20.034 }' 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.034 [2024-11-26 13:25:08.517383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.034 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.034 [2024-11-26 13:25:08.533808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:12:20.292 [2024-11-26 13:25:08.626280] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:20.292 [2024-11-26 13:25:08.633865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.292 [2024-11-26 13:25:08.634046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:20.292 [2024-11-26 13:25:08.634071] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:20.292 [2024-11-26 13:25:08.679573] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.292 "name": "raid_bdev1", 00:12:20.292 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:20.292 "strip_size_kb": 0, 00:12:20.292 "state": "online", 00:12:20.292 "raid_level": "raid1", 00:12:20.292 "superblock": false, 00:12:20.292 "num_base_bdevs": 2, 00:12:20.292 "num_base_bdevs_discovered": 1, 00:12:20.292 "num_base_bdevs_operational": 1, 00:12:20.292 "base_bdevs_list": [ 00:12:20.292 { 00:12:20.292 "name": null, 00:12:20.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.292 "is_configured": false, 00:12:20.292 "data_offset": 0, 00:12:20.292 "data_size": 65536 00:12:20.292 }, 00:12:20.292 { 00:12:20.292 "name": "BaseBdev2", 00:12:20.292 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:20.292 "is_configured": true, 00:12:20.292 "data_offset": 0, 00:12:20.292 "data_size": 65536 00:12:20.292 } 00:12:20.292 ] 00:12:20.292 }' 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.292 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.863 167.00 IOPS, 501.00 MiB/s [2024-11-26T13:25:09.433Z] 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.863 "name": "raid_bdev1", 00:12:20.863 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:20.863 "strip_size_kb": 0, 00:12:20.863 "state": "online", 00:12:20.863 "raid_level": "raid1", 00:12:20.863 "superblock": false, 00:12:20.863 "num_base_bdevs": 2, 00:12:20.863 "num_base_bdevs_discovered": 1, 00:12:20.863 "num_base_bdevs_operational": 1, 00:12:20.863 "base_bdevs_list": [ 00:12:20.863 { 00:12:20.863 "name": null, 00:12:20.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.863 "is_configured": false, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 65536 00:12:20.863 }, 00:12:20.863 { 00:12:20.863 "name": "BaseBdev2", 00:12:20.863 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:20.863 "is_configured": true, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 65536 00:12:20.863 } 00:12:20.863 ] 00:12:20.863 }' 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.863 [2024-11-26 13:25:09.373929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.863 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:21.138 [2024-11-26 13:25:09.426826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:21.138 [2024-11-26 13:25:09.429098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:21.138 [2024-11-26 13:25:09.552704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:21.138 [2024-11-26 13:25:09.700398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:21.688 187.67 IOPS, 563.00 MiB/s [2024-11-26T13:25:10.258Z] [2024-11-26 13:25:10.041942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:21.688 [2024-11-26 13:25:10.042258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:21.946 [2024-11-26 13:25:10.282791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.946 "name": "raid_bdev1", 00:12:21.946 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:21.946 "strip_size_kb": 0, 00:12:21.946 "state": "online", 00:12:21.946 "raid_level": "raid1", 00:12:21.946 "superblock": false, 00:12:21.946 "num_base_bdevs": 2, 00:12:21.946 "num_base_bdevs_discovered": 2, 00:12:21.946 "num_base_bdevs_operational": 2, 00:12:21.946 "process": { 00:12:21.946 "type": "rebuild", 00:12:21.946 "target": "spare", 00:12:21.946 "progress": { 00:12:21.946 "blocks": 12288, 00:12:21.946 "percent": 18 00:12:21.946 } 00:12:21.946 }, 00:12:21.946 "base_bdevs_list": [ 00:12:21.946 { 00:12:21.946 "name": "spare", 00:12:21.946 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:21.946 "is_configured": true, 00:12:21.946 "data_offset": 0, 00:12:21.946 "data_size": 65536 00:12:21.946 }, 00:12:21.946 { 
00:12:21.946 "name": "BaseBdev2", 00:12:21.946 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:21.946 "is_configured": true, 00:12:21.946 "data_offset": 0, 00:12:21.946 "data_size": 65536 00:12:21.946 } 00:12:21.946 ] 00:12:21.946 }' 00:12:21.946 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.205 [2024-11-26 13:25:10.540693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:22.205 [2024-11-26 13:25:10.541441] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.205 "name": "raid_bdev1", 00:12:22.205 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:22.205 "strip_size_kb": 0, 00:12:22.205 "state": "online", 00:12:22.205 "raid_level": "raid1", 00:12:22.205 "superblock": false, 00:12:22.205 "num_base_bdevs": 2, 00:12:22.205 "num_base_bdevs_discovered": 2, 00:12:22.205 "num_base_bdevs_operational": 2, 00:12:22.205 "process": { 00:12:22.205 "type": "rebuild", 00:12:22.205 "target": "spare", 00:12:22.205 "progress": { 00:12:22.205 "blocks": 14336, 00:12:22.205 "percent": 21 00:12:22.205 } 00:12:22.205 }, 00:12:22.205 "base_bdevs_list": [ 00:12:22.205 { 00:12:22.205 "name": "spare", 00:12:22.205 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:22.205 "is_configured": true, 00:12:22.205 "data_offset": 0, 00:12:22.205 "data_size": 65536 00:12:22.205 }, 00:12:22.205 { 00:12:22.205 "name": "BaseBdev2", 00:12:22.205 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:22.205 "is_configured": true, 00:12:22.205 "data_offset": 0, 00:12:22.205 "data_size": 65536 00:12:22.205 } 00:12:22.205 ] 00:12:22.205 }' 00:12:22.205 13:25:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.205 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.463 [2024-11-26 13:25:10.769383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:22.722 154.00 IOPS, 462.00 MiB/s [2024-11-26T13:25:11.292Z] [2024-11-26 13:25:11.217873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:23.288 [2024-11-26 13:25:11.559141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:23.288 [2024-11-26 13:25:11.559711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.288 [2024-11-26 13:25:11.782290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:23.288 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.288 "name": "raid_bdev1", 00:12:23.288 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:23.288 "strip_size_kb": 0, 00:12:23.288 "state": "online", 00:12:23.288 "raid_level": "raid1", 00:12:23.288 "superblock": false, 00:12:23.288 "num_base_bdevs": 2, 00:12:23.288 "num_base_bdevs_discovered": 2, 00:12:23.288 "num_base_bdevs_operational": 2, 00:12:23.288 "process": { 00:12:23.288 "type": "rebuild", 00:12:23.288 "target": "spare", 00:12:23.288 "progress": { 00:12:23.288 "blocks": 26624, 00:12:23.288 "percent": 40 00:12:23.288 } 00:12:23.288 }, 00:12:23.288 "base_bdevs_list": [ 00:12:23.288 { 00:12:23.288 "name": "spare", 00:12:23.288 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:23.288 "is_configured": true, 00:12:23.288 "data_offset": 0, 00:12:23.288 "data_size": 65536 00:12:23.288 }, 00:12:23.288 { 00:12:23.288 "name": "BaseBdev2", 00:12:23.288 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:23.288 "is_configured": true, 00:12:23.288 "data_offset": 0, 00:12:23.288 "data_size": 65536 00:12:23.288 } 00:12:23.288 ] 00:12:23.289 }' 00:12:23.289 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.546 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:23.546 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.546 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.546 13:25:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:24.479 131.40 IOPS, 394.20 MiB/s [2024-11-26T13:25:13.049Z] [2024-11-26 13:25:12.808356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.479 116.17 IOPS, 348.50 MiB/s [2024-11-26T13:25:13.049Z] 13:25:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.479 "name": "raid_bdev1", 00:12:24.479 "uuid": 
"417872ec-6b66-4f96-9cef-8991aa726571", 00:12:24.479 "strip_size_kb": 0, 00:12:24.479 "state": "online", 00:12:24.479 "raid_level": "raid1", 00:12:24.479 "superblock": false, 00:12:24.479 "num_base_bdevs": 2, 00:12:24.479 "num_base_bdevs_discovered": 2, 00:12:24.479 "num_base_bdevs_operational": 2, 00:12:24.479 "process": { 00:12:24.479 "type": "rebuild", 00:12:24.479 "target": "spare", 00:12:24.479 "progress": { 00:12:24.479 "blocks": 47104, 00:12:24.479 "percent": 71 00:12:24.479 } 00:12:24.479 }, 00:12:24.479 "base_bdevs_list": [ 00:12:24.479 { 00:12:24.479 "name": "spare", 00:12:24.479 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:24.479 "is_configured": true, 00:12:24.479 "data_offset": 0, 00:12:24.479 "data_size": 65536 00:12:24.479 }, 00:12:24.479 { 00:12:24.479 "name": "BaseBdev2", 00:12:24.479 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:24.479 "is_configured": true, 00:12:24.479 "data_offset": 0, 00:12:24.479 "data_size": 65536 00:12:24.479 } 00:12:24.479 ] 00:12:24.479 }' 00:12:24.479 13:25:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.479 13:25:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.479 13:25:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.738 13:25:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.738 13:25:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.674 [2024-11-26 13:25:13.914885] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:25.674 104.43 IOPS, 313.29 MiB/s [2024-11-26T13:25:14.244Z] [2024-11-26 13:25:14.014909] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:25.674 [2024-11-26 13:25:14.016492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.674 "name": "raid_bdev1", 00:12:25.674 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:25.674 "strip_size_kb": 0, 00:12:25.674 "state": "online", 00:12:25.674 "raid_level": "raid1", 00:12:25.674 "superblock": false, 00:12:25.674 "num_base_bdevs": 2, 00:12:25.674 "num_base_bdevs_discovered": 2, 00:12:25.674 "num_base_bdevs_operational": 2, 00:12:25.674 "base_bdevs_list": [ 00:12:25.674 { 00:12:25.674 "name": "spare", 00:12:25.674 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:25.674 "is_configured": true, 00:12:25.674 "data_offset": 0, 00:12:25.674 "data_size": 65536 00:12:25.674 }, 00:12:25.674 { 00:12:25.674 "name": "BaseBdev2", 00:12:25.674 
"uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:25.674 "is_configured": true, 00:12:25.674 "data_offset": 0, 00:12:25.674 "data_size": 65536 00:12:25.674 } 00:12:25.674 ] 00:12:25.674 }' 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:25.674 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.933 "name": "raid_bdev1", 00:12:25.933 "uuid": 
"417872ec-6b66-4f96-9cef-8991aa726571", 00:12:25.933 "strip_size_kb": 0, 00:12:25.933 "state": "online", 00:12:25.933 "raid_level": "raid1", 00:12:25.933 "superblock": false, 00:12:25.933 "num_base_bdevs": 2, 00:12:25.933 "num_base_bdevs_discovered": 2, 00:12:25.933 "num_base_bdevs_operational": 2, 00:12:25.933 "base_bdevs_list": [ 00:12:25.933 { 00:12:25.933 "name": "spare", 00:12:25.933 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:25.933 "is_configured": true, 00:12:25.933 "data_offset": 0, 00:12:25.933 "data_size": 65536 00:12:25.933 }, 00:12:25.933 { 00:12:25.933 "name": "BaseBdev2", 00:12:25.933 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:25.933 "is_configured": true, 00:12:25.933 "data_offset": 0, 00:12:25.933 "data_size": 65536 00:12:25.933 } 00:12:25.933 ] 00:12:25.933 }' 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.933 "name": "raid_bdev1", 00:12:25.933 "uuid": "417872ec-6b66-4f96-9cef-8991aa726571", 00:12:25.933 "strip_size_kb": 0, 00:12:25.933 "state": "online", 00:12:25.933 "raid_level": "raid1", 00:12:25.933 "superblock": false, 00:12:25.933 "num_base_bdevs": 2, 00:12:25.933 "num_base_bdevs_discovered": 2, 00:12:25.933 "num_base_bdevs_operational": 2, 00:12:25.933 "base_bdevs_list": [ 00:12:25.933 { 00:12:25.933 "name": "spare", 00:12:25.933 "uuid": "d7a43bc3-037e-5216-a6b2-95bd0cf3a810", 00:12:25.933 "is_configured": true, 00:12:25.933 "data_offset": 0, 00:12:25.933 "data_size": 65536 00:12:25.933 }, 00:12:25.933 { 00:12:25.933 "name": "BaseBdev2", 00:12:25.933 "uuid": "5457096a-5138-5eb0-9f96-3a1ecb6f837e", 00:12:25.933 "is_configured": true, 00:12:25.933 "data_offset": 0, 00:12:25.933 "data_size": 65536 00:12:25.933 } 00:12:25.933 ] 00:12:25.933 }' 00:12:25.933 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.933 13:25:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.501 13:25:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.501 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.501 13:25:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.501 [2024-11-26 13:25:14.922310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.501 [2024-11-26 13:25:14.922508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.501 95.12 IOPS, 285.38 MiB/s 00:12:26.501 Latency(us) 00:12:26.501 [2024-11-26T13:25:15.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.501 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:26.501 raid_bdev1 : 8.05 94.81 284.42 0.00 0.00 15146.24 271.83 113913.48 00:12:26.501 [2024-11-26T13:25:15.071Z] =================================================================================================================== 00:12:26.501 [2024-11-26T13:25:15.071Z] Total : 94.81 284.42 0.00 0.00 15146.24 271.83 113913.48 00:12:26.501 { 00:12:26.501 "results": [ 00:12:26.501 { 00:12:26.501 "job": "raid_bdev1", 00:12:26.501 "core_mask": "0x1", 00:12:26.501 "workload": "randrw", 00:12:26.501 "percentage": 50, 00:12:26.501 "status": "finished", 00:12:26.501 "queue_depth": 2, 00:12:26.501 "io_size": 3145728, 00:12:26.501 "runtime": 8.048066, 00:12:26.501 "iops": 94.80538554231538, 00:12:26.501 "mibps": 284.41615662694613, 00:12:26.501 "io_failed": 0, 00:12:26.501 "io_timeout": 0, 00:12:26.501 "avg_latency_us": 15146.244727749314, 00:12:26.501 "min_latency_us": 271.82545454545453, 00:12:26.501 "max_latency_us": 113913.48363636364 00:12:26.501 } 00:12:26.501 ], 00:12:26.501 "core_count": 1 00:12:26.501 } 00:12:26.501 [2024-11-26 13:25:14.997363] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.501 [2024-11-26 13:25:14.997405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.501 [2024-11-26 13:25:14.997498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.501 [2024-11-26 13:25:14.997513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:26.501 13:25:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:26.501 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:27.069 /dev/nbd0 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.069 1+0 records in 00:12:27.069 1+0 records out 00:12:27.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581298 s, 7.0 MB/s 00:12:27.069 13:25:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:27.070 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:27.330 /dev/nbd1 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.330 1+0 records in 00:12:27.330 1+0 records out 00:12:27.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032398 s, 12.6 MB/s 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.330 13:25:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76055 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76055 ']' 00:12:27.898 13:25:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76055 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76055 00:12:27.898 killing process with pid 76055 00:12:27.898 Received shutdown signal, test time was about 9.491747 seconds 00:12:27.898 00:12:27.898 Latency(us) 00:12:27.898 [2024-11-26T13:25:16.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.898 [2024-11-26T13:25:16.468Z] =================================================================================================================== 00:12:27.898 [2024-11-26T13:25:16.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76055' 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76055 00:12:27.898 [2024-11-26 13:25:16.425896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.898 13:25:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76055 00:12:28.157 [2024-11-26 13:25:16.581592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:29.095 00:12:29.095 real 0m12.478s 00:12:29.095 user 0m16.507s 00:12:29.095 sys 0m1.336s 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.095 ************************************ 00:12:29.095 END TEST raid_rebuild_test_io 00:12:29.095 ************************************ 00:12:29.095 13:25:17 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:29.095 13:25:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:29.095 13:25:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.095 13:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.095 ************************************ 00:12:29.095 START TEST raid_rebuild_test_sb_io 00:12:29.095 ************************************ 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.095 13:25:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:29.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76432 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76432 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76432 ']' 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.095 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.095 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:29.095 Zero copy mechanism will not be used. 00:12:29.095 [2024-11-26 13:25:17.642600] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:12:29.095 [2024-11-26 13:25:17.642799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76432 ] 00:12:29.355 [2024-11-26 13:25:17.821410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.355 [2024-11-26 13:25:17.919478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.614 [2024-11-26 13:25:18.088329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.614 [2024-11-26 13:25:18.088618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 BaseBdev1_malloc 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 [2024-11-26 13:25:18.557408] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:30.183 [2024-11-26 13:25:18.557492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.183 [2024-11-26 13:25:18.557522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:30.183 [2024-11-26 13:25:18.557538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.183 [2024-11-26 13:25:18.559824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.183 [2024-11-26 13:25:18.559871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.183 BaseBdev1 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 BaseBdev2_malloc 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 [2024-11-26 13:25:18.603282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:30.183 [2024-11-26 13:25:18.603605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:30.183 [2024-11-26 13:25:18.603670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:30.183 [2024-11-26 13:25:18.603789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.183 [2024-11-26 13:25:18.606113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.183 [2024-11-26 13:25:18.606156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.183 BaseBdev2 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 spare_malloc 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 spare_delay 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 
[2024-11-26 13:25:18.663945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.183 [2024-11-26 13:25:18.664162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.183 [2024-11-26 13:25:18.664195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:30.183 [2024-11-26 13:25:18.664212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.183 [2024-11-26 13:25:18.666664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.183 [2024-11-26 13:25:18.666818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.183 spare 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 [2024-11-26 13:25:18.672018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.183 [2024-11-26 13:25:18.674143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.183 [2024-11-26 13:25:18.674489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:30.183 [2024-11-26 13:25:18.674618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.183 [2024-11-26 13:25:18.674929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:30.183 [2024-11-26 13:25:18.675122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:30.183 [2024-11-26 
13:25:18.675137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:30.183 [2024-11-26 13:25:18.675314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.183 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.183 "name": "raid_bdev1", 00:12:30.183 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:30.183 "strip_size_kb": 0, 00:12:30.183 "state": "online", 00:12:30.183 "raid_level": "raid1", 00:12:30.183 "superblock": true, 00:12:30.184 "num_base_bdevs": 2, 00:12:30.184 "num_base_bdevs_discovered": 2, 00:12:30.184 "num_base_bdevs_operational": 2, 00:12:30.184 "base_bdevs_list": [ 00:12:30.184 { 00:12:30.184 "name": "BaseBdev1", 00:12:30.184 "uuid": "b85c7d4e-4c86-5179-963f-d27e14f44725", 00:12:30.184 "is_configured": true, 00:12:30.184 "data_offset": 2048, 00:12:30.184 "data_size": 63488 00:12:30.184 }, 00:12:30.184 { 00:12:30.184 "name": "BaseBdev2", 00:12:30.184 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:30.184 "is_configured": true, 00:12:30.184 "data_offset": 2048, 00:12:30.184 "data_size": 63488 00:12:30.184 } 00:12:30.184 ] 00:12:30.184 }' 00:12:30.184 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.184 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.749 [2024-11-26 13:25:19.180344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.749 [2024-11-26 13:25:19.272134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.749 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.750 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.008 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.008 "name": "raid_bdev1", 00:12:31.008 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:31.008 "strip_size_kb": 0, 00:12:31.008 "state": "online", 00:12:31.008 "raid_level": "raid1", 00:12:31.008 "superblock": true, 00:12:31.008 "num_base_bdevs": 2, 00:12:31.008 "num_base_bdevs_discovered": 1, 00:12:31.008 "num_base_bdevs_operational": 1, 00:12:31.008 "base_bdevs_list": [ 00:12:31.008 { 00:12:31.008 "name": null, 00:12:31.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.008 "is_configured": false, 00:12:31.008 "data_offset": 0, 00:12:31.008 "data_size": 63488 00:12:31.008 }, 00:12:31.008 { 00:12:31.008 "name": "BaseBdev2", 00:12:31.008 "uuid": 
"c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:31.008 "is_configured": true, 00:12:31.008 "data_offset": 2048, 00:12:31.008 "data_size": 63488 00:12:31.008 } 00:12:31.008 ] 00:12:31.008 }' 00:12:31.008 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.008 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.008 [2024-11-26 13:25:19.399256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:31.008 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.008 Zero copy mechanism will not be used. 00:12:31.008 Running I/O for 60 seconds... 00:12:31.267 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.267 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.267 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.267 [2024-11-26 13:25:19.749483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.267 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.267 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:31.267 [2024-11-26 13:25:19.803130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:31.267 [2024-11-26 13:25:19.805221] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.526 [2024-11-26 13:25:19.930342] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:31.526 [2024-11-26 13:25:19.930836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:31.785 [2024-11-26 13:25:20.140426] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:31.785 [2024-11-26 13:25:20.140633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:32.044 214.00 IOPS, 642.00 MiB/s [2024-11-26T13:25:20.614Z] [2024-11-26 13:25:20.474561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:32.044 [2024-11-26 13:25:20.589563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:32.044 [2024-11-26 13:25:20.589939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.303 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.303 13:25:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.303 "name": "raid_bdev1", 00:12:32.303 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:32.303 "strip_size_kb": 0, 00:12:32.303 "state": "online", 00:12:32.303 "raid_level": "raid1", 00:12:32.303 "superblock": true, 00:12:32.303 "num_base_bdevs": 2, 00:12:32.303 "num_base_bdevs_discovered": 2, 00:12:32.303 "num_base_bdevs_operational": 2, 00:12:32.303 "process": { 00:12:32.303 "type": "rebuild", 00:12:32.303 "target": "spare", 00:12:32.303 "progress": { 00:12:32.303 "blocks": 12288, 00:12:32.303 "percent": 19 00:12:32.303 } 00:12:32.303 }, 00:12:32.303 "base_bdevs_list": [ 00:12:32.303 { 00:12:32.303 "name": "spare", 00:12:32.303 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:32.303 "is_configured": true, 00:12:32.303 "data_offset": 2048, 00:12:32.303 "data_size": 63488 00:12:32.303 }, 00:12:32.303 { 00:12:32.304 "name": "BaseBdev2", 00:12:32.304 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:32.304 "is_configured": true, 00:12:32.304 "data_offset": 2048, 00:12:32.304 "data_size": 63488 00:12:32.304 } 00:12:32.304 ] 00:12:32.304 }' 00:12:32.304 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.563 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.563 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.563 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.563 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.563 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.563 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.563 [2024-11-26 
13:25:20.959382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.563 [2024-11-26 13:25:21.065500] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:32.563 [2024-11-26 13:25:21.072953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.563 [2024-11-26 13:25:21.073118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.563 [2024-11-26 13:25:21.073143] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:32.563 [2024-11-26 13:25:21.101146] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.563 
13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.563 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.820 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.820 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.820 "name": "raid_bdev1", 00:12:32.820 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:32.820 "strip_size_kb": 0, 00:12:32.820 "state": "online", 00:12:32.820 "raid_level": "raid1", 00:12:32.820 "superblock": true, 00:12:32.820 "num_base_bdevs": 2, 00:12:32.820 "num_base_bdevs_discovered": 1, 00:12:32.820 "num_base_bdevs_operational": 1, 00:12:32.820 "base_bdevs_list": [ 00:12:32.820 { 00:12:32.820 "name": null, 00:12:32.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.820 "is_configured": false, 00:12:32.820 "data_offset": 0, 00:12:32.820 "data_size": 63488 00:12:32.820 }, 00:12:32.820 { 00:12:32.820 "name": "BaseBdev2", 00:12:32.820 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:32.820 "is_configured": true, 00:12:32.820 "data_offset": 2048, 00:12:32.820 "data_size": 63488 00:12:32.820 } 00:12:32.820 ] 00:12:32.820 }' 00:12:32.820 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.820 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.078 183.00 IOPS, 549.00 MiB/s [2024-11-26T13:25:21.648Z] 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.078 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.337 "name": "raid_bdev1", 00:12:33.337 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:33.337 "strip_size_kb": 0, 00:12:33.337 "state": "online", 00:12:33.337 "raid_level": "raid1", 00:12:33.337 "superblock": true, 00:12:33.337 "num_base_bdevs": 2, 00:12:33.337 "num_base_bdevs_discovered": 1, 00:12:33.337 "num_base_bdevs_operational": 1, 00:12:33.337 "base_bdevs_list": [ 00:12:33.337 { 00:12:33.337 "name": null, 00:12:33.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.337 "is_configured": false, 00:12:33.337 "data_offset": 0, 00:12:33.337 "data_size": 63488 00:12:33.337 }, 00:12:33.337 { 00:12:33.337 "name": "BaseBdev2", 00:12:33.337 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:33.337 "is_configured": true, 00:12:33.337 "data_offset": 2048, 00:12:33.337 "data_size": 63488 00:12:33.337 } 00:12:33.337 ] 00:12:33.337 }' 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.337 [2024-11-26 13:25:21.779339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.337 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:33.337 [2024-11-26 13:25:21.825188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:33.337 [2024-11-26 13:25:21.827325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.596 [2024-11-26 13:25:21.929074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:33.596 [2024-11-26 13:25:21.929399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:33.596 [2024-11-26 13:25:22.155330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:34.164 187.67 IOPS, 563.00 MiB/s [2024-11-26T13:25:22.734Z] [2024-11-26 13:25:22.719478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:34.164 [2024-11-26 
13:25:22.719730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.424 [2024-11-26 13:25:22.846854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.424 "name": "raid_bdev1", 00:12:34.424 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:34.424 "strip_size_kb": 0, 00:12:34.424 "state": "online", 00:12:34.424 "raid_level": "raid1", 00:12:34.424 "superblock": true, 00:12:34.424 "num_base_bdevs": 2, 00:12:34.424 "num_base_bdevs_discovered": 2, 00:12:34.424 "num_base_bdevs_operational": 2, 00:12:34.424 "process": { 00:12:34.424 "type": "rebuild", 00:12:34.424 "target": "spare", 
00:12:34.424 "progress": { 00:12:34.424 "blocks": 14336, 00:12:34.424 "percent": 22 00:12:34.424 } 00:12:34.424 }, 00:12:34.424 "base_bdevs_list": [ 00:12:34.424 { 00:12:34.424 "name": "spare", 00:12:34.424 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:34.424 "is_configured": true, 00:12:34.424 "data_offset": 2048, 00:12:34.424 "data_size": 63488 00:12:34.424 }, 00:12:34.424 { 00:12:34.424 "name": "BaseBdev2", 00:12:34.424 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:34.424 "is_configured": true, 00:12:34.424 "data_offset": 2048, 00:12:34.424 "data_size": 63488 00:12:34.424 } 00:12:34.424 ] 00:12:34.424 }' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:34.424 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=424 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.424 13:25:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.424 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.683 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.683 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.683 "name": "raid_bdev1", 00:12:34.683 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:34.683 "strip_size_kb": 0, 00:12:34.683 "state": "online", 00:12:34.683 "raid_level": "raid1", 00:12:34.683 "superblock": true, 00:12:34.683 "num_base_bdevs": 2, 00:12:34.683 "num_base_bdevs_discovered": 2, 00:12:34.683 "num_base_bdevs_operational": 2, 00:12:34.683 "process": { 00:12:34.683 "type": "rebuild", 00:12:34.683 "target": "spare", 00:12:34.683 "progress": { 00:12:34.683 "blocks": 18432, 00:12:34.683 "percent": 29 00:12:34.683 } 00:12:34.683 }, 00:12:34.683 "base_bdevs_list": [ 00:12:34.683 { 00:12:34.683 "name": "spare", 00:12:34.683 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:34.683 "is_configured": true, 00:12:34.683 "data_offset": 2048, 
00:12:34.683 "data_size": 63488 00:12:34.683 }, 00:12:34.683 { 00:12:34.683 "name": "BaseBdev2", 00:12:34.683 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:34.683 "is_configured": true, 00:12:34.683 "data_offset": 2048, 00:12:34.683 "data_size": 63488 00:12:34.683 } 00:12:34.683 ] 00:12:34.683 }' 00:12:34.683 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.683 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.683 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.683 [2024-11-26 13:25:23.087201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:34.683 [2024-11-26 13:25:23.087754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:34.683 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.683 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.201 159.00 IOPS, 477.00 MiB/s [2024-11-26T13:25:23.771Z] [2024-11-26 13:25:23.646899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:35.460 [2024-11-26 13:25:23.888401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:35.720 [2024-11-26 13:25:24.109256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.720 
13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.720 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.720 "name": "raid_bdev1", 00:12:35.720 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:35.720 "strip_size_kb": 0, 00:12:35.720 "state": "online", 00:12:35.720 "raid_level": "raid1", 00:12:35.720 "superblock": true, 00:12:35.720 "num_base_bdevs": 2, 00:12:35.720 "num_base_bdevs_discovered": 2, 00:12:35.720 "num_base_bdevs_operational": 2, 00:12:35.720 "process": { 00:12:35.720 "type": "rebuild", 00:12:35.720 "target": "spare", 00:12:35.720 "progress": { 00:12:35.720 "blocks": 34816, 00:12:35.720 "percent": 54 00:12:35.720 } 00:12:35.720 }, 00:12:35.720 "base_bdevs_list": [ 00:12:35.720 { 00:12:35.720 "name": "spare", 00:12:35.720 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:35.720 "is_configured": true, 00:12:35.720 "data_offset": 2048, 00:12:35.720 "data_size": 63488 00:12:35.720 }, 00:12:35.720 { 00:12:35.721 "name": "BaseBdev2", 00:12:35.721 "uuid": 
"c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:35.721 "is_configured": true, 00:12:35.721 "data_offset": 2048, 00:12:35.721 "data_size": 63488 00:12:35.721 } 00:12:35.721 ] 00:12:35.721 }' 00:12:35.721 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.721 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.721 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.979 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.979 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.979 135.60 IOPS, 406.80 MiB/s [2024-11-26T13:25:24.549Z] [2024-11-26 13:25:24.424460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:35.979 [2024-11-26 13:25:24.425112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.916 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.916 "name": "raid_bdev1", 00:12:36.916 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:36.916 "strip_size_kb": 0, 00:12:36.916 "state": "online", 00:12:36.916 "raid_level": "raid1", 00:12:36.916 "superblock": true, 00:12:36.916 "num_base_bdevs": 2, 00:12:36.916 "num_base_bdevs_discovered": 2, 00:12:36.916 "num_base_bdevs_operational": 2, 00:12:36.916 "process": { 00:12:36.916 "type": "rebuild", 00:12:36.916 "target": "spare", 00:12:36.916 "progress": { 00:12:36.916 "blocks": 53248, 00:12:36.917 "percent": 83 00:12:36.917 } 00:12:36.917 }, 00:12:36.917 "base_bdevs_list": [ 00:12:36.917 { 00:12:36.917 "name": "spare", 00:12:36.917 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:36.917 "is_configured": true, 00:12:36.917 "data_offset": 2048, 00:12:36.917 "data_size": 63488 00:12:36.917 }, 00:12:36.917 { 00:12:36.917 "name": "BaseBdev2", 00:12:36.917 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:36.917 "is_configured": true, 00:12:36.917 "data_offset": 2048, 00:12:36.917 "data_size": 63488 00:12:36.917 } 00:12:36.917 ] 00:12:36.917 }' 00:12:36.917 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.917 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.917 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.917 120.17 IOPS, 360.50 MiB/s [2024-11-26T13:25:25.487Z] 13:25:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.917 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:36.917 [2024-11-26 13:25:25.474593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:37.175 [2024-11-26 13:25:25.582295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:37.175 [2024-11-26 13:25:25.582589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:37.434 [2024-11-26 13:25:25.819712] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:37.434 [2024-11-26 13:25:25.925501] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:37.434 [2024-11-26 13:25:25.927870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.003 108.14 IOPS, 324.43 MiB/s [2024-11-26T13:25:26.573Z] 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.003 "name": "raid_bdev1", 00:12:38.003 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:38.003 "strip_size_kb": 0, 00:12:38.003 "state": "online", 00:12:38.003 "raid_level": "raid1", 00:12:38.003 "superblock": true, 00:12:38.003 "num_base_bdevs": 2, 00:12:38.003 "num_base_bdevs_discovered": 2, 00:12:38.003 "num_base_bdevs_operational": 2, 00:12:38.003 "base_bdevs_list": [ 00:12:38.003 { 00:12:38.003 "name": "spare", 00:12:38.003 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:38.003 "is_configured": true, 00:12:38.003 "data_offset": 2048, 00:12:38.003 "data_size": 63488 00:12:38.003 }, 00:12:38.003 { 00:12:38.003 "name": "BaseBdev2", 00:12:38.003 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:38.003 "is_configured": true, 00:12:38.003 "data_offset": 2048, 00:12:38.003 "data_size": 63488 00:12:38.003 } 00:12:38.003 ] 00:12:38.003 }' 00:12:38.003 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.263 "name": "raid_bdev1", 00:12:38.263 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:38.263 "strip_size_kb": 0, 00:12:38.263 "state": "online", 00:12:38.263 "raid_level": "raid1", 00:12:38.263 "superblock": true, 00:12:38.263 "num_base_bdevs": 2, 00:12:38.263 "num_base_bdevs_discovered": 2, 00:12:38.263 "num_base_bdevs_operational": 2, 00:12:38.263 "base_bdevs_list": [ 00:12:38.263 { 00:12:38.263 "name": "spare", 00:12:38.263 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:38.263 "is_configured": true, 00:12:38.263 "data_offset": 2048, 00:12:38.263 "data_size": 63488 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "name": "BaseBdev2", 00:12:38.263 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:38.263 "is_configured": true, 00:12:38.263 "data_offset": 2048, 00:12:38.263 "data_size": 63488 00:12:38.263 } 
00:12:38.263 ] 00:12:38.263 }' 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.263 13:25:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.263 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.522 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.522 "name": "raid_bdev1", 00:12:38.522 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:38.522 "strip_size_kb": 0, 00:12:38.522 "state": "online", 00:12:38.522 "raid_level": "raid1", 00:12:38.522 "superblock": true, 00:12:38.522 "num_base_bdevs": 2, 00:12:38.522 "num_base_bdevs_discovered": 2, 00:12:38.522 "num_base_bdevs_operational": 2, 00:12:38.522 "base_bdevs_list": [ 00:12:38.522 { 00:12:38.522 "name": "spare", 00:12:38.522 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:38.522 "is_configured": true, 00:12:38.522 "data_offset": 2048, 00:12:38.522 "data_size": 63488 00:12:38.522 }, 00:12:38.522 { 00:12:38.522 "name": "BaseBdev2", 00:12:38.522 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:38.522 "is_configured": true, 00:12:38.522 "data_offset": 2048, 00:12:38.522 "data_size": 63488 00:12:38.522 } 00:12:38.522 ] 00:12:38.522 }' 00:12:38.522 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.522 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.782 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.782 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.782 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.782 [2024-11-26 13:25:27.292001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.782 [2024-11-26 13:25:27.292222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.782 00:12:38.782 Latency(us) 00:12:38.782 
[2024-11-26T13:25:27.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.782 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:38.782 raid_bdev1 : 7.92 98.88 296.65 0.00 0.00 14574.95 273.69 108670.60 00:12:38.782 [2024-11-26T13:25:27.352Z] =================================================================================================================== 00:12:38.782 [2024-11-26T13:25:27.352Z] Total : 98.88 296.65 0.00 0.00 14574.95 273.69 108670.60 00:12:38.782 [2024-11-26 13:25:27.335099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.782 [2024-11-26 13:25:27.335149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.782 [2024-11-26 13:25:27.335253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.782 [2024-11-26 13:25:27.335274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:38.782 { 00:12:38.782 "results": [ 00:12:38.782 { 00:12:38.782 "job": "raid_bdev1", 00:12:38.782 "core_mask": "0x1", 00:12:38.782 "workload": "randrw", 00:12:38.782 "percentage": 50, 00:12:38.782 "status": "finished", 00:12:38.782 "queue_depth": 2, 00:12:38.782 "io_size": 3145728, 00:12:38.782 "runtime": 7.91851, 00:12:38.782 "iops": 98.88223920914415, 00:12:38.782 "mibps": 296.64671762743245, 00:12:38.782 "io_failed": 0, 00:12:38.782 "io_timeout": 0, 00:12:38.782 "avg_latency_us": 14574.945222338325, 00:12:38.782 "min_latency_us": 273.6872727272727, 00:12:38.782 "max_latency_us": 108670.60363636364 00:12:38.782 } 00:12:38.782 ], 00:12:38.782 "core_count": 1 00:12:38.782 } 00:12:38.782 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.782 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.782 
13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.782 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:38.782 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.041 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:39.300 /dev/nbd0 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:39.300 
13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.300 1+0 records in 00:12:39.300 1+0 records out 00:12:39.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398149 s, 10.3 MB/s 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.300 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:39.301 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.301 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:39.301 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.301 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.301 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:39.560 /dev/nbd1 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.560 1+0 records in 00:12:39.560 1+0 records out 00:12:39.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272976 s, 15.0 MB/s 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.560 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:39.820 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:39.820 
13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.820 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:39.820 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.820 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:39.820 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.820 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:40.078 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:40.079 
13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.079 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.338 
13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.338 [2024-11-26 13:25:28.767368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.338 [2024-11-26 13:25:28.767427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.338 [2024-11-26 13:25:28.767453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:40.338 [2024-11-26 13:25:28.767468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.338 [2024-11-26 13:25:28.769761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.338 [2024-11-26 13:25:28.769976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.338 [2024-11-26 13:25:28.770083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:40.338 [2024-11-26 13:25:28.770147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.338 [2024-11-26 13:25:28.770359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.338 spare 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.338 [2024-11-26 13:25:28.870472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:40.338 [2024-11-26 13:25:28.870497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.338 [2024-11-26 13:25:28.870778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002b0d0 00:12:40.338 [2024-11-26 13:25:28.870952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:40.338 [2024-11-26 13:25:28.870981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:40.338 [2024-11-26 13:25:28.871133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.338 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.598 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.598 "name": "raid_bdev1", 00:12:40.598 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:40.598 "strip_size_kb": 0, 00:12:40.598 "state": "online", 00:12:40.598 "raid_level": "raid1", 00:12:40.598 "superblock": true, 00:12:40.598 "num_base_bdevs": 2, 00:12:40.598 "num_base_bdevs_discovered": 2, 00:12:40.598 "num_base_bdevs_operational": 2, 00:12:40.598 "base_bdevs_list": [ 00:12:40.598 { 00:12:40.598 "name": "spare", 00:12:40.598 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:40.598 "is_configured": true, 00:12:40.598 "data_offset": 2048, 00:12:40.598 "data_size": 63488 00:12:40.598 }, 00:12:40.598 { 00:12:40.598 "name": "BaseBdev2", 00:12:40.598 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:40.598 "is_configured": true, 00:12:40.598 "data_offset": 2048, 00:12:40.598 "data_size": 63488 00:12:40.598 } 00:12:40.598 ] 00:12:40.598 }' 00:12:40.598 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.598 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.855 
13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.855 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.113 "name": "raid_bdev1", 00:12:41.113 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:41.113 "strip_size_kb": 0, 00:12:41.113 "state": "online", 00:12:41.113 "raid_level": "raid1", 00:12:41.113 "superblock": true, 00:12:41.113 "num_base_bdevs": 2, 00:12:41.113 "num_base_bdevs_discovered": 2, 00:12:41.113 "num_base_bdevs_operational": 2, 00:12:41.113 "base_bdevs_list": [ 00:12:41.113 { 00:12:41.113 "name": "spare", 00:12:41.113 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:41.113 "is_configured": true, 00:12:41.113 "data_offset": 2048, 00:12:41.113 "data_size": 63488 00:12:41.113 }, 00:12:41.113 { 00:12:41.113 "name": "BaseBdev2", 00:12:41.113 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:41.113 "is_configured": true, 00:12:41.113 "data_offset": 2048, 00:12:41.113 "data_size": 63488 00:12:41.113 } 00:12:41.113 ] 00:12:41.113 }' 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.113 
13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.113 [2024-11-26 13:25:29.595668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.113 "name": "raid_bdev1", 00:12:41.113 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:41.113 "strip_size_kb": 0, 00:12:41.113 "state": "online", 00:12:41.113 "raid_level": "raid1", 00:12:41.113 "superblock": true, 00:12:41.113 "num_base_bdevs": 2, 00:12:41.113 "num_base_bdevs_discovered": 1, 00:12:41.113 "num_base_bdevs_operational": 1, 00:12:41.113 "base_bdevs_list": [ 00:12:41.113 { 00:12:41.113 "name": null, 00:12:41.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.113 "is_configured": false, 00:12:41.113 "data_offset": 0, 00:12:41.113 "data_size": 63488 00:12:41.113 }, 00:12:41.113 { 00:12:41.113 "name": "BaseBdev2", 00:12:41.113 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:41.113 "is_configured": true, 00:12:41.113 "data_offset": 2048, 00:12:41.113 "data_size": 63488 00:12:41.113 } 00:12:41.113 ] 00:12:41.113 }' 00:12:41.113 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.113 13:25:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.679 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:41.679 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.679 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.679 [2024-11-26 13:25:30.115846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.679 [2024-11-26 13:25:30.116132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:41.680 [2024-11-26 13:25:30.116158] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:41.680 [2024-11-26 13:25:30.116201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.680 [2024-11-26 13:25:30.129169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:41.680 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.680 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:41.680 [2024-11-26 13:25:30.131188] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.618 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.618 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.618 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.618 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.618 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.618 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.619 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.619 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.619 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.619 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.878 "name": "raid_bdev1", 00:12:42.878 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:42.878 "strip_size_kb": 0, 00:12:42.878 "state": "online", 00:12:42.878 "raid_level": "raid1", 00:12:42.878 "superblock": true, 00:12:42.878 "num_base_bdevs": 2, 00:12:42.878 "num_base_bdevs_discovered": 2, 00:12:42.878 "num_base_bdevs_operational": 2, 00:12:42.878 "process": { 00:12:42.878 "type": "rebuild", 00:12:42.878 "target": "spare", 00:12:42.878 "progress": { 00:12:42.878 "blocks": 20480, 00:12:42.878 "percent": 32 00:12:42.878 } 00:12:42.878 }, 00:12:42.878 "base_bdevs_list": [ 00:12:42.878 { 00:12:42.878 "name": "spare", 00:12:42.878 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:42.878 "is_configured": true, 00:12:42.878 "data_offset": 2048, 00:12:42.878 "data_size": 63488 00:12:42.878 }, 00:12:42.878 { 00:12:42.878 "name": "BaseBdev2", 00:12:42.878 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:42.878 "is_configured": true, 00:12:42.878 "data_offset": 2048, 00:12:42.878 "data_size": 63488 00:12:42.878 } 00:12:42.878 ] 00:12:42.878 }' 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.878 [2024-11-26 13:25:31.296963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.878 [2024-11-26 13:25:31.338496] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:42.878 [2024-11-26 13:25:31.338687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.878 [2024-11-26 13:25:31.338718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.878 [2024-11-26 13:25:31.338743] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:42.878 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.879 "name": "raid_bdev1", 00:12:42.879 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:42.879 "strip_size_kb": 0, 00:12:42.879 "state": "online", 00:12:42.879 "raid_level": "raid1", 00:12:42.879 "superblock": true, 00:12:42.879 "num_base_bdevs": 2, 00:12:42.879 "num_base_bdevs_discovered": 1, 00:12:42.879 "num_base_bdevs_operational": 1, 00:12:42.879 "base_bdevs_list": [ 00:12:42.879 { 00:12:42.879 "name": null, 00:12:42.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.879 "is_configured": false, 00:12:42.879 "data_offset": 0, 00:12:42.879 "data_size": 63488 00:12:42.879 }, 00:12:42.879 { 00:12:42.879 "name": "BaseBdev2", 00:12:42.879 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:42.879 "is_configured": true, 00:12:42.879 "data_offset": 2048, 00:12:42.879 "data_size": 63488 00:12:42.879 } 00:12:42.879 ] 00:12:42.879 }' 
00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.879 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.448 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.448 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.448 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.448 [2024-11-26 13:25:31.884699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.448 [2024-11-26 13:25:31.884902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.448 [2024-11-26 13:25:31.884945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:43.448 [2024-11-26 13:25:31.884959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.448 [2024-11-26 13:25:31.885479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.448 [2024-11-26 13:25:31.885502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.448 [2024-11-26 13:25:31.885594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:43.448 [2024-11-26 13:25:31.885610] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:43.448 [2024-11-26 13:25:31.885626] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:43.448 [2024-11-26 13:25:31.885650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.448 spare 00:12:43.448 [2024-11-26 13:25:31.896436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:43.448 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.448 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:43.448 [2024-11-26 13:25:31.898445] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.386 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.645 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.645 "name": "raid_bdev1", 00:12:44.645 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:44.645 "strip_size_kb": 0, 00:12:44.645 
"state": "online", 00:12:44.646 "raid_level": "raid1", 00:12:44.646 "superblock": true, 00:12:44.646 "num_base_bdevs": 2, 00:12:44.646 "num_base_bdevs_discovered": 2, 00:12:44.646 "num_base_bdevs_operational": 2, 00:12:44.646 "process": { 00:12:44.646 "type": "rebuild", 00:12:44.646 "target": "spare", 00:12:44.646 "progress": { 00:12:44.646 "blocks": 20480, 00:12:44.646 "percent": 32 00:12:44.646 } 00:12:44.646 }, 00:12:44.646 "base_bdevs_list": [ 00:12:44.646 { 00:12:44.646 "name": "spare", 00:12:44.646 "uuid": "0ad7aa7c-8caf-5987-8360-6c983bb1e515", 00:12:44.646 "is_configured": true, 00:12:44.646 "data_offset": 2048, 00:12:44.646 "data_size": 63488 00:12:44.646 }, 00:12:44.646 { 00:12:44.646 "name": "BaseBdev2", 00:12:44.646 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:44.646 "is_configured": true, 00:12:44.646 "data_offset": 2048, 00:12:44.646 "data_size": 63488 00:12:44.646 } 00:12:44.646 ] 00:12:44.646 }' 00:12:44.646 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 [2024-11-26 13:25:33.068553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.646 [2024-11-26 13:25:33.104930] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:44.646 [2024-11-26 13:25:33.105131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.646 [2024-11-26 13:25:33.105158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.646 [2024-11-26 13:25:33.105173] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.646 13:25:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.646 "name": "raid_bdev1", 00:12:44.646 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:44.646 "strip_size_kb": 0, 00:12:44.646 "state": "online", 00:12:44.646 "raid_level": "raid1", 00:12:44.646 "superblock": true, 00:12:44.646 "num_base_bdevs": 2, 00:12:44.646 "num_base_bdevs_discovered": 1, 00:12:44.646 "num_base_bdevs_operational": 1, 00:12:44.646 "base_bdevs_list": [ 00:12:44.646 { 00:12:44.646 "name": null, 00:12:44.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.646 "is_configured": false, 00:12:44.646 "data_offset": 0, 00:12:44.646 "data_size": 63488 00:12:44.646 }, 00:12:44.646 { 00:12:44.646 "name": "BaseBdev2", 00:12:44.646 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:44.646 "is_configured": true, 00:12:44.646 "data_offset": 2048, 00:12:44.646 "data_size": 63488 00:12:44.646 } 00:12:44.646 ] 00:12:44.646 }' 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.646 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.214 "name": "raid_bdev1", 00:12:45.214 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:45.214 "strip_size_kb": 0, 00:12:45.214 "state": "online", 00:12:45.214 "raid_level": "raid1", 00:12:45.214 "superblock": true, 00:12:45.214 "num_base_bdevs": 2, 00:12:45.214 "num_base_bdevs_discovered": 1, 00:12:45.214 "num_base_bdevs_operational": 1, 00:12:45.214 "base_bdevs_list": [ 00:12:45.214 { 00:12:45.214 "name": null, 00:12:45.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.214 "is_configured": false, 00:12:45.214 "data_offset": 0, 00:12:45.214 "data_size": 63488 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "name": "BaseBdev2", 00:12:45.214 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:45.214 "is_configured": true, 00:12:45.214 "data_offset": 2048, 00:12:45.214 "data_size": 63488 00:12:45.214 } 00:12:45.214 ] 00:12:45.214 }' 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.214 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 [2024-11-26 13:25:33.810991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:45.474 [2024-11-26 13:25:33.811043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.474 [2024-11-26 13:25:33.811068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:45.474 [2024-11-26 13:25:33.811085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.474 [2024-11-26 13:25:33.811574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.474 [2024-11-26 13:25:33.811609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.474 [2024-11-26 13:25:33.811684] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:45.474 [2024-11-26 13:25:33.811709] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:45.474 [2024-11-26 13:25:33.811719] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:45.474 [2024-11-26 13:25:33.811731] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:45.474 BaseBdev1 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.474 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.412 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.412 "name": "raid_bdev1", 00:12:46.412 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:46.412 "strip_size_kb": 0, 00:12:46.412 "state": "online", 00:12:46.412 "raid_level": "raid1", 00:12:46.412 "superblock": true, 00:12:46.412 "num_base_bdevs": 2, 00:12:46.412 "num_base_bdevs_discovered": 1, 00:12:46.412 "num_base_bdevs_operational": 1, 00:12:46.412 "base_bdevs_list": [ 00:12:46.412 { 00:12:46.412 "name": null, 00:12:46.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.412 "is_configured": false, 00:12:46.412 "data_offset": 0, 00:12:46.412 "data_size": 63488 00:12:46.412 }, 00:12:46.412 { 00:12:46.412 "name": "BaseBdev2", 00:12:46.412 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:46.412 "is_configured": true, 00:12:46.412 "data_offset": 2048, 00:12:46.412 "data_size": 63488 00:12:46.412 } 00:12:46.412 ] 00:12:46.413 }' 00:12:46.413 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.413 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.981 "name": "raid_bdev1", 00:12:46.981 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:46.981 "strip_size_kb": 0, 00:12:46.981 "state": "online", 00:12:46.981 "raid_level": "raid1", 00:12:46.981 "superblock": true, 00:12:46.981 "num_base_bdevs": 2, 00:12:46.981 "num_base_bdevs_discovered": 1, 00:12:46.981 "num_base_bdevs_operational": 1, 00:12:46.981 "base_bdevs_list": [ 00:12:46.981 { 00:12:46.981 "name": null, 00:12:46.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.981 "is_configured": false, 00:12:46.981 "data_offset": 0, 00:12:46.981 "data_size": 63488 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "name": "BaseBdev2", 00:12:46.981 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:46.981 "is_configured": true, 00:12:46.981 "data_offset": 2048, 00:12:46.981 "data_size": 63488 00:12:46.981 } 00:12:46.981 ] 00:12:46.981 }' 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.981 [2024-11-26 13:25:35.503559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.981 [2024-11-26 13:25:35.503817] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:46.981 [2024-11-26 13:25:35.503841] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:46.981 request: 00:12:46.981 { 00:12:46.981 "base_bdev": "BaseBdev1", 00:12:46.981 "raid_bdev": "raid_bdev1", 00:12:46.981 "method": "bdev_raid_add_base_bdev", 00:12:46.981 "req_id": 1 00:12:46.981 } 00:12:46.981 Got JSON-RPC error response 00:12:46.981 response: 00:12:46.981 { 00:12:46.981 "code": -22, 00:12:46.981 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:46.981 } 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:46.981 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:47.990 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:47.990 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.990 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.281 "name": "raid_bdev1", 00:12:48.281 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:48.281 "strip_size_kb": 0, 00:12:48.281 "state": "online", 00:12:48.281 "raid_level": "raid1", 00:12:48.281 "superblock": true, 00:12:48.281 "num_base_bdevs": 2, 00:12:48.281 "num_base_bdevs_discovered": 1, 00:12:48.281 "num_base_bdevs_operational": 1, 00:12:48.281 "base_bdevs_list": [ 00:12:48.281 { 00:12:48.281 "name": null, 00:12:48.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.281 "is_configured": false, 00:12:48.281 "data_offset": 0, 00:12:48.281 "data_size": 63488 00:12:48.281 }, 00:12:48.281 { 00:12:48.281 "name": "BaseBdev2", 00:12:48.281 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:48.281 "is_configured": true, 00:12:48.281 "data_offset": 2048, 00:12:48.281 "data_size": 63488 00:12:48.281 } 00:12:48.281 ] 00:12:48.281 }' 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.281 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.540 13:25:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.540 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.540 "name": "raid_bdev1", 00:12:48.540 "uuid": "427a6ff3-78df-4eb6-8a2b-72e28740b1ca", 00:12:48.541 "strip_size_kb": 0, 00:12:48.541 "state": "online", 00:12:48.541 "raid_level": "raid1", 00:12:48.541 "superblock": true, 00:12:48.541 "num_base_bdevs": 2, 00:12:48.541 "num_base_bdevs_discovered": 1, 00:12:48.541 "num_base_bdevs_operational": 1, 00:12:48.541 "base_bdevs_list": [ 00:12:48.541 { 00:12:48.541 "name": null, 00:12:48.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.541 "is_configured": false, 00:12:48.541 "data_offset": 0, 00:12:48.541 "data_size": 63488 00:12:48.541 }, 00:12:48.541 { 00:12:48.541 "name": "BaseBdev2", 00:12:48.541 "uuid": "c6eba967-be77-531f-80d5-fc7db65440a1", 00:12:48.541 "is_configured": true, 00:12:48.541 "data_offset": 2048, 00:12:48.541 "data_size": 63488 00:12:48.541 } 00:12:48.541 ] 00:12:48.541 }' 00:12:48.541 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.800 13:25:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76432 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76432 ']' 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76432 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76432 00:12:48.800 killing process with pid 76432 00:12:48.800 Received shutdown signal, test time was about 17.795631 seconds 00:12:48.800 00:12:48.800 Latency(us) 00:12:48.800 [2024-11-26T13:25:37.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.800 [2024-11-26T13:25:37.370Z] =================================================================================================================== 00:12:48.800 [2024-11-26T13:25:37.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76432' 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76432 00:12:48.800 [2024-11-26 13:25:37.197039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.800 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76432 00:12:48.800 [2024-11-26 13:25:37.197137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.800 [2024-11-26 13:25:37.197193] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.800 [2024-11-26 13:25:37.197207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:48.800 [2024-11-26 13:25:37.354355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:49.738 00:12:49.738 real 0m20.713s 00:12:49.738 user 0m28.304s 00:12:49.738 sys 0m1.887s 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.738 ************************************ 00:12:49.738 END TEST raid_rebuild_test_sb_io 00:12:49.738 ************************************ 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.738 13:25:38 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:49.738 13:25:38 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:49.738 13:25:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:49.738 13:25:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.738 13:25:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.738 ************************************ 00:12:49.738 START TEST raid_rebuild_test 00:12:49.738 ************************************ 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:49.738 13:25:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:49.738 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77127 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77127 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77127 ']' 00:12:49.739 13:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.998 13:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.998 13:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.998 13:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.998 13:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.998 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:49.998 Zero copy mechanism will not be used. 
00:12:49.998 [2024-11-26 13:25:38.418969] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:12:49.998 [2024-11-26 13:25:38.419170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77127 ] 00:12:50.257 [2024-11-26 13:25:38.601625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.257 [2024-11-26 13:25:38.702272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.517 [2024-11-26 13:25:38.871564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.517 [2024-11-26 13:25:38.871607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.778 BaseBdev1_malloc 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.778 
[2024-11-26 13:25:39.324222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:50.778 [2024-11-26 13:25:39.324317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.778 [2024-11-26 13:25:39.324346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:50.778 [2024-11-26 13:25:39.324361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.778 [2024-11-26 13:25:39.326595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.778 [2024-11-26 13:25:39.326641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:50.778 BaseBdev1 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.778 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 BaseBdev2_malloc 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-26 13:25:39.369913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:51.036 [2024-11-26 13:25:39.370258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:51.036 [2024-11-26 13:25:39.370325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:51.036 [2024-11-26 13:25:39.370349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.036 [2024-11-26 13:25:39.372658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.036 [2024-11-26 13:25:39.372703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.036 BaseBdev2 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 BaseBdev3_malloc 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-26 13:25:39.422904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:51.036 [2024-11-26 13:25:39.423142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.036 [2024-11-26 13:25:39.423208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:51.036 [2024-11-26 13:25:39.423331] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.036 [2024-11-26 13:25:39.425658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.036 [2024-11-26 13:25:39.425805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:51.036 BaseBdev3 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 BaseBdev4_malloc 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-26 13:25:39.468686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:51.036 [2024-11-26 13:25:39.468881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.036 [2024-11-26 13:25:39.468946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:51.036 [2024-11-26 13:25:39.469053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.036 [2024-11-26 13:25:39.471377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.036 [2024-11-26 13:25:39.471523] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:51.036 BaseBdev4 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 spare_malloc 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 spare_delay 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.036 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-26 13:25:39.518271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.036 [2024-11-26 13:25:39.518469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.036 [2024-11-26 13:25:39.518531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:51.036 [2024-11-26 13:25:39.518679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.036 [2024-11-26 
13:25:39.521137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.037 [2024-11-26 13:25:39.521305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.037 spare 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.037 [2024-11-26 13:25:39.526320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.037 [2024-11-26 13:25:39.528495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.037 [2024-11-26 13:25:39.528706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.037 [2024-11-26 13:25:39.528822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.037 [2024-11-26 13:25:39.529033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:51.037 [2024-11-26 13:25:39.529061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:51.037 [2024-11-26 13:25:39.529354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:51.037 [2024-11-26 13:25:39.529556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:51.037 [2024-11-26 13:25:39.529573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:51.037 [2024-11-26 13:25:39.529735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.037 "name": "raid_bdev1", 00:12:51.037 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:12:51.037 "strip_size_kb": 0, 00:12:51.037 "state": "online", 00:12:51.037 "raid_level": 
"raid1", 00:12:51.037 "superblock": false, 00:12:51.037 "num_base_bdevs": 4, 00:12:51.037 "num_base_bdevs_discovered": 4, 00:12:51.037 "num_base_bdevs_operational": 4, 00:12:51.037 "base_bdevs_list": [ 00:12:51.037 { 00:12:51.037 "name": "BaseBdev1", 00:12:51.037 "uuid": "1c4a3eb0-54ab-54e8-8e36-723beb11fe93", 00:12:51.037 "is_configured": true, 00:12:51.037 "data_offset": 0, 00:12:51.037 "data_size": 65536 00:12:51.037 }, 00:12:51.037 { 00:12:51.037 "name": "BaseBdev2", 00:12:51.037 "uuid": "39c275e6-b976-5caf-9fcc-d7b841848585", 00:12:51.037 "is_configured": true, 00:12:51.037 "data_offset": 0, 00:12:51.037 "data_size": 65536 00:12:51.037 }, 00:12:51.037 { 00:12:51.037 "name": "BaseBdev3", 00:12:51.037 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:12:51.037 "is_configured": true, 00:12:51.037 "data_offset": 0, 00:12:51.037 "data_size": 65536 00:12:51.037 }, 00:12:51.037 { 00:12:51.037 "name": "BaseBdev4", 00:12:51.037 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:12:51.037 "is_configured": true, 00:12:51.037 "data_offset": 0, 00:12:51.037 "data_size": 65536 00:12:51.037 } 00:12:51.037 ] 00:12:51.037 }' 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.037 13:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.605 [2024-11-26 13:25:40.042697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.605 13:25:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.605 13:25:40 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:51.864 [2024-11-26 13:25:40.394482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:51.864 /dev/nbd0 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.123 1+0 records in 00:12:52.123 1+0 records out 00:12:52.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287897 s, 14.2 MB/s 00:12:52.123 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:52.124 13:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:00.245 65536+0 records in 00:13:00.245 65536+0 records out 00:13:00.245 33554432 bytes (34 MB, 32 MiB) copied, 6.84562 s, 4.9 MB/s 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:00.245 [2024-11-26 13:25:47.546060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.245 
13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 [2024-11-26 13:25:47.574109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.245 13:25:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.245 "name": "raid_bdev1", 00:13:00.245 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:00.245 "strip_size_kb": 0, 00:13:00.245 "state": "online", 00:13:00.245 "raid_level": "raid1", 00:13:00.245 "superblock": false, 00:13:00.245 "num_base_bdevs": 4, 00:13:00.245 "num_base_bdevs_discovered": 3, 00:13:00.245 "num_base_bdevs_operational": 3, 00:13:00.245 "base_bdevs_list": [ 00:13:00.245 { 00:13:00.245 "name": null, 00:13:00.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.245 "is_configured": false, 00:13:00.245 "data_offset": 0, 00:13:00.245 "data_size": 65536 00:13:00.245 }, 00:13:00.245 { 00:13:00.245 "name": "BaseBdev2", 00:13:00.245 "uuid": "39c275e6-b976-5caf-9fcc-d7b841848585", 00:13:00.245 "is_configured": true, 00:13:00.245 "data_offset": 0, 00:13:00.245 "data_size": 65536 00:13:00.245 }, 00:13:00.245 { 00:13:00.245 "name": "BaseBdev3", 00:13:00.245 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:00.245 "is_configured": true, 00:13:00.245 "data_offset": 0, 00:13:00.245 "data_size": 65536 00:13:00.245 }, 00:13:00.245 { 00:13:00.245 "name": "BaseBdev4", 00:13:00.245 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:00.245 
"is_configured": true, 00:13:00.245 "data_offset": 0, 00:13:00.245 "data_size": 65536 00:13:00.245 } 00:13:00.245 ] 00:13:00.245 }' 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.245 13:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 13:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.245 13:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.245 13:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 [2024-11-26 13:25:48.054196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.245 [2024-11-26 13:25:48.065709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:00.245 13:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.246 13:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:00.246 [2024-11-26 13:25:48.067872] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.829 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.830 
13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.830 "name": "raid_bdev1", 00:13:00.830 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:00.830 "strip_size_kb": 0, 00:13:00.830 "state": "online", 00:13:00.830 "raid_level": "raid1", 00:13:00.830 "superblock": false, 00:13:00.830 "num_base_bdevs": 4, 00:13:00.830 "num_base_bdevs_discovered": 4, 00:13:00.830 "num_base_bdevs_operational": 4, 00:13:00.830 "process": { 00:13:00.830 "type": "rebuild", 00:13:00.830 "target": "spare", 00:13:00.830 "progress": { 00:13:00.830 "blocks": 20480, 00:13:00.830 "percent": 31 00:13:00.830 } 00:13:00.830 }, 00:13:00.830 "base_bdevs_list": [ 00:13:00.830 { 00:13:00.830 "name": "spare", 00:13:00.830 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:00.830 "is_configured": true, 00:13:00.830 "data_offset": 0, 00:13:00.830 "data_size": 65536 00:13:00.830 }, 00:13:00.830 { 00:13:00.830 "name": "BaseBdev2", 00:13:00.830 "uuid": "39c275e6-b976-5caf-9fcc-d7b841848585", 00:13:00.830 "is_configured": true, 00:13:00.830 "data_offset": 0, 00:13:00.830 "data_size": 65536 00:13:00.830 }, 00:13:00.830 { 00:13:00.830 "name": "BaseBdev3", 00:13:00.830 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:00.830 "is_configured": true, 00:13:00.830 "data_offset": 0, 00:13:00.830 "data_size": 65536 00:13:00.830 }, 00:13:00.830 { 00:13:00.830 "name": "BaseBdev4", 00:13:00.830 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:00.830 "is_configured": true, 00:13:00.830 "data_offset": 0, 00:13:00.830 "data_size": 65536 00:13:00.830 } 00:13:00.830 ] 00:13:00.830 }' 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.830 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.830 [2024-11-26 13:25:49.225493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.831 [2024-11-26 13:25:49.275135] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:00.831 [2024-11-26 13:25:49.275354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.831 [2024-11-26 13:25:49.275382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.831 [2024-11-26 13:25:49.275397] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.831 13:25:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.831 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.831 "name": "raid_bdev1", 00:13:00.831 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:00.831 "strip_size_kb": 0, 00:13:00.831 "state": "online", 00:13:00.831 "raid_level": "raid1", 00:13:00.831 "superblock": false, 00:13:00.831 "num_base_bdevs": 4, 00:13:00.831 "num_base_bdevs_discovered": 3, 00:13:00.831 "num_base_bdevs_operational": 3, 00:13:00.831 "base_bdevs_list": [ 00:13:00.831 { 00:13:00.831 "name": null, 00:13:00.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.831 "is_configured": false, 00:13:00.831 "data_offset": 0, 00:13:00.831 "data_size": 65536 00:13:00.831 }, 00:13:00.831 { 00:13:00.832 "name": "BaseBdev2", 00:13:00.832 "uuid": "39c275e6-b976-5caf-9fcc-d7b841848585", 00:13:00.832 "is_configured": true, 00:13:00.832 "data_offset": 0, 00:13:00.832 "data_size": 65536 00:13:00.832 }, 00:13:00.832 { 00:13:00.832 "name": 
"BaseBdev3", 00:13:00.832 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:00.832 "is_configured": true, 00:13:00.832 "data_offset": 0, 00:13:00.832 "data_size": 65536 00:13:00.832 }, 00:13:00.832 { 00:13:00.832 "name": "BaseBdev4", 00:13:00.832 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:00.832 "is_configured": true, 00:13:00.832 "data_offset": 0, 00:13:00.832 "data_size": 65536 00:13:00.832 } 00:13:00.832 ] 00:13:00.832 }' 00:13:00.832 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.832 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.402 "name": "raid_bdev1", 00:13:01.402 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:01.402 "strip_size_kb": 0, 00:13:01.402 "state": "online", 00:13:01.402 "raid_level": 
"raid1", 00:13:01.402 "superblock": false, 00:13:01.402 "num_base_bdevs": 4, 00:13:01.402 "num_base_bdevs_discovered": 3, 00:13:01.402 "num_base_bdevs_operational": 3, 00:13:01.402 "base_bdevs_list": [ 00:13:01.402 { 00:13:01.402 "name": null, 00:13:01.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.402 "is_configured": false, 00:13:01.402 "data_offset": 0, 00:13:01.402 "data_size": 65536 00:13:01.402 }, 00:13:01.402 { 00:13:01.402 "name": "BaseBdev2", 00:13:01.402 "uuid": "39c275e6-b976-5caf-9fcc-d7b841848585", 00:13:01.402 "is_configured": true, 00:13:01.402 "data_offset": 0, 00:13:01.402 "data_size": 65536 00:13:01.402 }, 00:13:01.402 { 00:13:01.402 "name": "BaseBdev3", 00:13:01.402 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:01.402 "is_configured": true, 00:13:01.402 "data_offset": 0, 00:13:01.402 "data_size": 65536 00:13:01.402 }, 00:13:01.402 { 00:13:01.402 "name": "BaseBdev4", 00:13:01.402 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:01.402 "is_configured": true, 00:13:01.402 "data_offset": 0, 00:13:01.402 "data_size": 65536 00:13:01.402 } 00:13:01.402 ] 00:13:01.402 }' 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.402 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.402 [2024-11-26 13:25:49.957718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:01.660 [2024-11-26 13:25:49.967518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:01.660 13:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.660 13:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.660 [2024-11-26 13:25:49.969630] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.597 13:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.597 "name": "raid_bdev1", 00:13:02.597 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:02.597 "strip_size_kb": 0, 00:13:02.597 "state": "online", 00:13:02.597 "raid_level": "raid1", 00:13:02.597 "superblock": false, 00:13:02.597 "num_base_bdevs": 4, 00:13:02.597 "num_base_bdevs_discovered": 4, 00:13:02.597 "num_base_bdevs_operational": 4, 
00:13:02.597 "process": { 00:13:02.597 "type": "rebuild", 00:13:02.597 "target": "spare", 00:13:02.597 "progress": { 00:13:02.597 "blocks": 20480, 00:13:02.597 "percent": 31 00:13:02.597 } 00:13:02.597 }, 00:13:02.597 "base_bdevs_list": [ 00:13:02.597 { 00:13:02.597 "name": "spare", 00:13:02.597 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:02.597 "is_configured": true, 00:13:02.597 "data_offset": 0, 00:13:02.597 "data_size": 65536 00:13:02.597 }, 00:13:02.597 { 00:13:02.597 "name": "BaseBdev2", 00:13:02.597 "uuid": "39c275e6-b976-5caf-9fcc-d7b841848585", 00:13:02.597 "is_configured": true, 00:13:02.597 "data_offset": 0, 00:13:02.597 "data_size": 65536 00:13:02.597 }, 00:13:02.597 { 00:13:02.597 "name": "BaseBdev3", 00:13:02.597 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:02.597 "is_configured": true, 00:13:02.597 "data_offset": 0, 00:13:02.597 "data_size": 65536 00:13:02.597 }, 00:13:02.597 { 00:13:02.597 "name": "BaseBdev4", 00:13:02.597 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:02.597 "is_configured": true, 00:13:02.597 "data_offset": 0, 00:13:02.597 "data_size": 65536 00:13:02.597 } 00:13:02.597 ] 00:13:02.597 }' 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.597 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.597 [2024-11-26 13:25:51.131347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.856 [2024-11-26 13:25:51.175350] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.856 "name": "raid_bdev1", 00:13:02.856 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:02.856 "strip_size_kb": 0, 00:13:02.856 "state": "online", 00:13:02.856 "raid_level": "raid1", 00:13:02.856 "superblock": false, 00:13:02.856 "num_base_bdevs": 4, 00:13:02.856 "num_base_bdevs_discovered": 3, 00:13:02.856 "num_base_bdevs_operational": 3, 00:13:02.856 "process": { 00:13:02.856 "type": "rebuild", 00:13:02.856 "target": "spare", 00:13:02.856 "progress": { 00:13:02.856 "blocks": 24576, 00:13:02.856 "percent": 37 00:13:02.856 } 00:13:02.856 }, 00:13:02.856 "base_bdevs_list": [ 00:13:02.856 { 00:13:02.856 "name": "spare", 00:13:02.856 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:02.856 "is_configured": true, 00:13:02.856 "data_offset": 0, 00:13:02.856 "data_size": 65536 00:13:02.856 }, 00:13:02.856 { 00:13:02.856 "name": null, 00:13:02.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.856 "is_configured": false, 00:13:02.856 "data_offset": 0, 00:13:02.856 "data_size": 65536 00:13:02.856 }, 00:13:02.856 { 00:13:02.856 "name": "BaseBdev3", 00:13:02.856 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:02.856 "is_configured": true, 00:13:02.856 "data_offset": 0, 00:13:02.856 "data_size": 65536 00:13:02.856 }, 00:13:02.856 { 00:13:02.856 "name": "BaseBdev4", 00:13:02.856 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:02.856 "is_configured": true, 00:13:02.856 "data_offset": 0, 00:13:02.856 "data_size": 65536 00:13:02.856 } 00:13:02.856 ] 00:13:02.856 }' 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.856 13:25:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.856 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.857 "name": "raid_bdev1", 00:13:02.857 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:02.857 "strip_size_kb": 0, 00:13:02.857 "state": "online", 00:13:02.857 "raid_level": "raid1", 00:13:02.857 "superblock": false, 00:13:02.857 "num_base_bdevs": 4, 00:13:02.857 "num_base_bdevs_discovered": 3, 00:13:02.857 "num_base_bdevs_operational": 3, 00:13:02.857 "process": { 00:13:02.857 "type": "rebuild", 00:13:02.857 "target": "spare", 00:13:02.857 "progress": { 00:13:02.857 "blocks": 26624, 00:13:02.857 "percent": 40 
00:13:02.857 } 00:13:02.857 }, 00:13:02.857 "base_bdevs_list": [ 00:13:02.857 { 00:13:02.857 "name": "spare", 00:13:02.857 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:02.857 "is_configured": true, 00:13:02.857 "data_offset": 0, 00:13:02.857 "data_size": 65536 00:13:02.857 }, 00:13:02.857 { 00:13:02.857 "name": null, 00:13:02.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.857 "is_configured": false, 00:13:02.857 "data_offset": 0, 00:13:02.857 "data_size": 65536 00:13:02.857 }, 00:13:02.857 { 00:13:02.857 "name": "BaseBdev3", 00:13:02.857 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:02.857 "is_configured": true, 00:13:02.857 "data_offset": 0, 00:13:02.857 "data_size": 65536 00:13:02.857 }, 00:13:02.857 { 00:13:02.857 "name": "BaseBdev4", 00:13:02.857 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:02.857 "is_configured": true, 00:13:02.857 "data_offset": 0, 00:13:02.857 "data_size": 65536 00:13:02.857 } 00:13:02.857 ] 00:13:02.857 }' 00:13:02.857 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.115 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.115 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.115 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.115 13:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.051 13:25:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.051 13:25:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.052 13:25:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 13:25:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.052 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.052 "name": "raid_bdev1", 00:13:04.052 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:04.052 "strip_size_kb": 0, 00:13:04.052 "state": "online", 00:13:04.052 "raid_level": "raid1", 00:13:04.052 "superblock": false, 00:13:04.052 "num_base_bdevs": 4, 00:13:04.052 "num_base_bdevs_discovered": 3, 00:13:04.052 "num_base_bdevs_operational": 3, 00:13:04.052 "process": { 00:13:04.052 "type": "rebuild", 00:13:04.052 "target": "spare", 00:13:04.052 "progress": { 00:13:04.052 "blocks": 51200, 00:13:04.052 "percent": 78 00:13:04.052 } 00:13:04.052 }, 00:13:04.052 "base_bdevs_list": [ 00:13:04.052 { 00:13:04.052 "name": "spare", 00:13:04.052 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:04.052 "is_configured": true, 00:13:04.052 "data_offset": 0, 00:13:04.052 "data_size": 65536 00:13:04.052 }, 00:13:04.052 { 00:13:04.052 "name": null, 00:13:04.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.052 "is_configured": false, 00:13:04.052 "data_offset": 0, 00:13:04.052 "data_size": 65536 00:13:04.052 }, 00:13:04.052 { 00:13:04.052 "name": "BaseBdev3", 00:13:04.052 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:04.052 "is_configured": true, 
00:13:04.052 "data_offset": 0, 00:13:04.052 "data_size": 65536 00:13:04.052 }, 00:13:04.052 { 00:13:04.052 "name": "BaseBdev4", 00:13:04.052 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:04.052 "is_configured": true, 00:13:04.052 "data_offset": 0, 00:13:04.052 "data_size": 65536 00:13:04.052 } 00:13:04.052 ] 00:13:04.052 }' 00:13:04.052 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.311 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.311 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.311 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.311 13:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.879 [2024-11-26 13:25:53.186406] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.879 [2024-11-26 13:25:53.186476] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.879 [2024-11-26 13:25:53.186538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.138 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.398 "name": "raid_bdev1", 00:13:05.398 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:05.398 "strip_size_kb": 0, 00:13:05.398 "state": "online", 00:13:05.398 "raid_level": "raid1", 00:13:05.398 "superblock": false, 00:13:05.398 "num_base_bdevs": 4, 00:13:05.398 "num_base_bdevs_discovered": 3, 00:13:05.398 "num_base_bdevs_operational": 3, 00:13:05.398 "base_bdevs_list": [ 00:13:05.398 { 00:13:05.398 "name": "spare", 00:13:05.398 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:05.398 "is_configured": true, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 }, 00:13:05.398 { 00:13:05.398 "name": null, 00:13:05.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.398 "is_configured": false, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 }, 00:13:05.398 { 00:13:05.398 "name": "BaseBdev3", 00:13:05.398 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:05.398 "is_configured": true, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 }, 00:13:05.398 { 00:13:05.398 "name": "BaseBdev4", 00:13:05.398 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:05.398 "is_configured": true, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 } 00:13:05.398 ] 00:13:05.398 }' 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.398 13:25:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.398 "name": "raid_bdev1", 00:13:05.398 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:05.398 "strip_size_kb": 0, 00:13:05.398 "state": "online", 00:13:05.398 "raid_level": "raid1", 00:13:05.398 "superblock": false, 00:13:05.398 "num_base_bdevs": 4, 00:13:05.398 "num_base_bdevs_discovered": 3, 00:13:05.398 "num_base_bdevs_operational": 3, 00:13:05.398 "base_bdevs_list": [ 00:13:05.398 { 00:13:05.398 "name": "spare", 
00:13:05.398 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:05.398 "is_configured": true, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 }, 00:13:05.398 { 00:13:05.398 "name": null, 00:13:05.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.398 "is_configured": false, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 }, 00:13:05.398 { 00:13:05.398 "name": "BaseBdev3", 00:13:05.398 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:05.398 "is_configured": true, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 }, 00:13:05.398 { 00:13:05.398 "name": "BaseBdev4", 00:13:05.398 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:05.398 "is_configured": true, 00:13:05.398 "data_offset": 0, 00:13:05.398 "data_size": 65536 00:13:05.398 } 00:13:05.398 ] 00:13:05.398 }' 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.398 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.657 13:25:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.657 13:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.657 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.657 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.657 "name": "raid_bdev1", 00:13:05.657 "uuid": "87de9874-6b64-4e9f-9886-0160c4ad8424", 00:13:05.657 "strip_size_kb": 0, 00:13:05.657 "state": "online", 00:13:05.657 "raid_level": "raid1", 00:13:05.657 "superblock": false, 00:13:05.657 "num_base_bdevs": 4, 00:13:05.657 "num_base_bdevs_discovered": 3, 00:13:05.657 "num_base_bdevs_operational": 3, 00:13:05.657 "base_bdevs_list": [ 00:13:05.657 { 00:13:05.657 "name": "spare", 00:13:05.658 "uuid": "73c77f0a-f4cf-5561-b9a8-9dbe1261db73", 00:13:05.658 "is_configured": true, 00:13:05.658 "data_offset": 0, 00:13:05.658 "data_size": 65536 00:13:05.658 }, 00:13:05.658 { 00:13:05.658 "name": null, 00:13:05.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.658 "is_configured": false, 00:13:05.658 "data_offset": 0, 00:13:05.658 "data_size": 65536 00:13:05.658 }, 00:13:05.658 { 00:13:05.658 "name": "BaseBdev3", 00:13:05.658 "uuid": "600343cb-9a31-5e33-8eca-062084b20c8c", 00:13:05.658 "is_configured": true, 
00:13:05.658 "data_offset": 0, 00:13:05.658 "data_size": 65536 00:13:05.658 }, 00:13:05.658 { 00:13:05.658 "name": "BaseBdev4", 00:13:05.658 "uuid": "7b4c187d-cdf3-5f21-b732-4e4e7a476e2a", 00:13:05.658 "is_configured": true, 00:13:05.658 "data_offset": 0, 00:13:05.658 "data_size": 65536 00:13:05.658 } 00:13:05.658 ] 00:13:05.658 }' 00:13:05.658 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.658 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.225 [2024-11-26 13:25:54.500745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.225 [2024-11-26 13:25:54.500908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.225 [2024-11-26 13:25:54.501078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.225 [2024-11-26 13:25:54.501311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.225 [2024-11-26 13:25:54.501336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:06.225 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.226 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:06.485 /dev/nbd0 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:06.485 13:25:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.485 1+0 records in 00:13:06.485 1+0 records out 00:13:06.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579063 s, 7.1 MB/s 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.485 13:25:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:06.764 /dev/nbd1 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.764 
13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.764 1+0 records in 00:13:06.764 1+0 records out 00:13:06.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358605 s, 11.4 MB/s 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.764 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.024 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:07.284 
13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.284 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.284 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.284 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.284 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.284 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.542 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77127 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77127 ']' 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77127 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77127 00:13:07.543 killing process with pid 77127 00:13:07.543 Received shutdown signal, test time was about 60.000000 seconds 00:13:07.543 00:13:07.543 Latency(us) 00:13:07.543 [2024-11-26T13:25:56.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.543 [2024-11-26T13:25:56.113Z] =================================================================================================================== 00:13:07.543 [2024-11-26T13:25:56.113Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77127' 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77127 00:13:07.543 [2024-11-26 13:25:55.882504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.543 13:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77127 00:13:07.801 [2024-11-26 13:25:56.210357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:08.739 00:13:08.739 real 0m18.746s 00:13:08.739 user 0m20.806s 00:13:08.739 sys 0m3.297s 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.739 ************************************ 00:13:08.739 END TEST raid_rebuild_test 00:13:08.739 ************************************ 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.739 13:25:57 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:08.739 13:25:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.739 13:25:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.739 13:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.739 ************************************ 00:13:08.739 START TEST raid_rebuild_test_sb 00:13:08.739 ************************************ 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:08.739 13:25:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77585 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77585 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77585 ']' 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.739 13:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:08.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.740 13:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.740 13:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.740 [2024-11-26 13:25:57.185953] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:13:08.740 [2024-11-26 13:25:57.186287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77585 ] 00:13:08.740 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.740 Zero copy mechanism will not be used. 00:13:08.999 [2024-11-26 13:25:57.348885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.999 [2024-11-26 13:25:57.446865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.258 [2024-11-26 13:25:57.616984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.258 [2024-11-26 13:25:57.617040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.827 
BaseBdev1_malloc 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.827 [2024-11-26 13:25:58.193460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.827 [2024-11-26 13:25:58.193533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.827 [2024-11-26 13:25:58.193563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.827 [2024-11-26 13:25:58.193580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.827 [2024-11-26 13:25:58.195871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.827 [2024-11-26 13:25:58.195918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.827 BaseBdev1 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.827 BaseBdev2_malloc 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.827 [2024-11-26 13:25:58.235109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.827 [2024-11-26 13:25:58.235169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.827 [2024-11-26 13:25:58.235192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.827 [2024-11-26 13:25:58.235209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.827 [2024-11-26 13:25:58.237423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.827 [2024-11-26 13:25:58.237615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.827 BaseBdev2 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.827 BaseBdev3_malloc 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:09.827 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.827 [2024-11-26 13:25:58.288004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:09.827 [2024-11-26 13:25:58.288069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.827 [2024-11-26 13:25:58.288096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:09.827 [2024-11-26 13:25:58.288112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.827 [2024-11-26 13:25:58.290359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.827 [2024-11-26 13:25:58.290404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:09.827 BaseBdev3 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.828 BaseBdev4_malloc 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.828 [2024-11-26 13:25:58.333768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:13:09.828 [2024-11-26 13:25:58.333957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.828 [2024-11-26 13:25:58.333991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:09.828 [2024-11-26 13:25:58.334009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.828 [2024-11-26 13:25:58.336280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.828 [2024-11-26 13:25:58.336323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:09.828 BaseBdev4 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.828 spare_malloc 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.828 spare_delay 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.828 13:25:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.828 [2024-11-26 13:25:58.383403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.828 [2024-11-26 13:25:58.383464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.828 [2024-11-26 13:25:58.383489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:09.828 [2024-11-26 13:25:58.383503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.828 [2024-11-26 13:25:58.385753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.828 [2024-11-26 13:25:58.385931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.828 spare 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.828 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.087 [2024-11-26 13:25:58.391451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.087 [2024-11-26 13:25:58.393446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.087 [2024-11-26 13:25:58.393532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.087 [2024-11-26 13:25:58.393605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:10.087 [2024-11-26 13:25:58.393810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:10.087 [2024-11-26 13:25:58.393835] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.087 [2024-11-26 13:25:58.394087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:10.087 [2024-11-26 13:25:58.394307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:10.087 [2024-11-26 13:25:58.394322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:10.087 [2024-11-26 13:25:58.394480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.087 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.088 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.088 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.088 "name": "raid_bdev1", 00:13:10.088 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:10.088 "strip_size_kb": 0, 00:13:10.088 "state": "online", 00:13:10.088 "raid_level": "raid1", 00:13:10.088 "superblock": true, 00:13:10.088 "num_base_bdevs": 4, 00:13:10.088 "num_base_bdevs_discovered": 4, 00:13:10.088 "num_base_bdevs_operational": 4, 00:13:10.088 "base_bdevs_list": [ 00:13:10.088 { 00:13:10.088 "name": "BaseBdev1", 00:13:10.088 "uuid": "eed9e53d-2eda-5b20-b8cb-1ace088ed4b7", 00:13:10.088 "is_configured": true, 00:13:10.088 "data_offset": 2048, 00:13:10.088 "data_size": 63488 00:13:10.088 }, 00:13:10.088 { 00:13:10.088 "name": "BaseBdev2", 00:13:10.088 "uuid": "b021d977-abb7-5dd1-b1ef-da6bc7777fdb", 00:13:10.088 "is_configured": true, 00:13:10.088 "data_offset": 2048, 00:13:10.088 "data_size": 63488 00:13:10.088 }, 00:13:10.088 { 00:13:10.088 "name": "BaseBdev3", 00:13:10.088 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:10.088 "is_configured": true, 00:13:10.088 "data_offset": 2048, 00:13:10.088 "data_size": 63488 00:13:10.088 }, 00:13:10.088 { 00:13:10.088 "name": "BaseBdev4", 00:13:10.088 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:10.088 "is_configured": true, 00:13:10.088 "data_offset": 2048, 00:13:10.088 "data_size": 63488 00:13:10.088 } 00:13:10.088 ] 00:13:10.088 }' 00:13:10.088 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.088 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:10.348 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:10.348 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.348 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.348 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.348 [2024-11-26 13:25:58.903827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.607 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.607 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:10.607 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:10.607 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.608 
13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.608 13:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.608 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:10.865 [2024-11-26 13:25:59.263619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.865 /dev/nbd0 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.865 13:25:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.865 1+0 records in 00:13:10.865 1+0 records out 00:13:10.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398947 s, 10.3 MB/s 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:10.865 13:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:17.426 63488+0 records in 00:13:17.426 63488+0 records out 00:13:17.426 32505856 bytes (33 MB, 31 MiB) copied, 6.6133 s, 4.9 MB/s 00:13:17.426 13:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:17.426 13:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.426 13:26:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:17.426 13:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.426 13:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:17.426 13:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.426 13:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:17.685 [2024-11-26 13:26:06.185097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.685 [2024-11-26 13:26:06.214026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.685 13:26:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.685 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.943 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.943 "name": "raid_bdev1", 00:13:17.943 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:17.943 "strip_size_kb": 0, 00:13:17.943 "state": "online", 00:13:17.943 "raid_level": "raid1", 00:13:17.943 "superblock": true, 00:13:17.943 "num_base_bdevs": 4, 
00:13:17.943 "num_base_bdevs_discovered": 3, 00:13:17.943 "num_base_bdevs_operational": 3, 00:13:17.943 "base_bdevs_list": [ 00:13:17.943 { 00:13:17.943 "name": null, 00:13:17.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.943 "is_configured": false, 00:13:17.943 "data_offset": 0, 00:13:17.943 "data_size": 63488 00:13:17.943 }, 00:13:17.943 { 00:13:17.943 "name": "BaseBdev2", 00:13:17.943 "uuid": "b021d977-abb7-5dd1-b1ef-da6bc7777fdb", 00:13:17.943 "is_configured": true, 00:13:17.943 "data_offset": 2048, 00:13:17.943 "data_size": 63488 00:13:17.943 }, 00:13:17.943 { 00:13:17.943 "name": "BaseBdev3", 00:13:17.943 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:17.943 "is_configured": true, 00:13:17.943 "data_offset": 2048, 00:13:17.943 "data_size": 63488 00:13:17.943 }, 00:13:17.943 { 00:13:17.943 "name": "BaseBdev4", 00:13:17.943 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:17.943 "is_configured": true, 00:13:17.943 "data_offset": 2048, 00:13:17.943 "data_size": 63488 00:13:17.943 } 00:13:17.943 ] 00:13:17.943 }' 00:13:17.943 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.943 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.201 13:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.201 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.201 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.201 [2024-11-26 13:26:06.678116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.201 [2024-11-26 13:26:06.689582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:18.202 13:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.202 13:26:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:13:18.202 [2024-11-26 13:26:06.691593] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.138 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.396 "name": "raid_bdev1", 00:13:19.396 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:19.396 "strip_size_kb": 0, 00:13:19.396 "state": "online", 00:13:19.396 "raid_level": "raid1", 00:13:19.396 "superblock": true, 00:13:19.396 "num_base_bdevs": 4, 00:13:19.396 "num_base_bdevs_discovered": 4, 00:13:19.396 "num_base_bdevs_operational": 4, 00:13:19.396 "process": { 00:13:19.396 "type": "rebuild", 00:13:19.396 "target": "spare", 00:13:19.396 "progress": { 00:13:19.396 "blocks": 20480, 00:13:19.396 "percent": 32 00:13:19.396 } 00:13:19.396 }, 00:13:19.396 "base_bdevs_list": [ 00:13:19.396 { 
00:13:19.396 "name": "spare", 00:13:19.396 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:19.396 "is_configured": true, 00:13:19.396 "data_offset": 2048, 00:13:19.396 "data_size": 63488 00:13:19.396 }, 00:13:19.396 { 00:13:19.396 "name": "BaseBdev2", 00:13:19.396 "uuid": "b021d977-abb7-5dd1-b1ef-da6bc7777fdb", 00:13:19.396 "is_configured": true, 00:13:19.396 "data_offset": 2048, 00:13:19.396 "data_size": 63488 00:13:19.396 }, 00:13:19.396 { 00:13:19.396 "name": "BaseBdev3", 00:13:19.396 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:19.396 "is_configured": true, 00:13:19.396 "data_offset": 2048, 00:13:19.396 "data_size": 63488 00:13:19.396 }, 00:13:19.396 { 00:13:19.396 "name": "BaseBdev4", 00:13:19.396 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:19.396 "is_configured": true, 00:13:19.396 "data_offset": 2048, 00:13:19.396 "data_size": 63488 00:13:19.396 } 00:13:19.396 ] 00:13:19.396 }' 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.396 [2024-11-26 13:26:07.861263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.396 [2024-11-26 13:26:07.898901] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:19.396 [2024-11-26 
13:26:07.898967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.396 [2024-11-26 13:26:07.898988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.396 [2024-11-26 13:26:07.899000] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.396 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.397 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.397 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.397 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:19.397 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.655 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.655 "name": "raid_bdev1", 00:13:19.655 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:19.655 "strip_size_kb": 0, 00:13:19.655 "state": "online", 00:13:19.655 "raid_level": "raid1", 00:13:19.655 "superblock": true, 00:13:19.655 "num_base_bdevs": 4, 00:13:19.655 "num_base_bdevs_discovered": 3, 00:13:19.655 "num_base_bdevs_operational": 3, 00:13:19.655 "base_bdevs_list": [ 00:13:19.655 { 00:13:19.655 "name": null, 00:13:19.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.655 "is_configured": false, 00:13:19.655 "data_offset": 0, 00:13:19.655 "data_size": 63488 00:13:19.655 }, 00:13:19.655 { 00:13:19.655 "name": "BaseBdev2", 00:13:19.655 "uuid": "b021d977-abb7-5dd1-b1ef-da6bc7777fdb", 00:13:19.655 "is_configured": true, 00:13:19.655 "data_offset": 2048, 00:13:19.655 "data_size": 63488 00:13:19.655 }, 00:13:19.655 { 00:13:19.655 "name": "BaseBdev3", 00:13:19.655 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:19.655 "is_configured": true, 00:13:19.655 "data_offset": 2048, 00:13:19.655 "data_size": 63488 00:13:19.655 }, 00:13:19.655 { 00:13:19.656 "name": "BaseBdev4", 00:13:19.656 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:19.656 "is_configured": true, 00:13:19.656 "data_offset": 2048, 00:13:19.656 "data_size": 63488 00:13:19.656 } 00:13:19.656 ] 00:13:19.656 }' 00:13:19.656 13:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.656 13:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.914 13:26:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.914 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.914 "name": "raid_bdev1", 00:13:19.914 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:19.914 "strip_size_kb": 0, 00:13:19.914 "state": "online", 00:13:19.914 "raid_level": "raid1", 00:13:19.914 "superblock": true, 00:13:19.914 "num_base_bdevs": 4, 00:13:19.914 "num_base_bdevs_discovered": 3, 00:13:19.914 "num_base_bdevs_operational": 3, 00:13:19.914 "base_bdevs_list": [ 00:13:19.914 { 00:13:19.914 "name": null, 00:13:19.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.914 "is_configured": false, 00:13:19.914 "data_offset": 0, 00:13:19.915 "data_size": 63488 00:13:19.915 }, 00:13:19.915 { 00:13:19.915 "name": "BaseBdev2", 00:13:19.915 "uuid": "b021d977-abb7-5dd1-b1ef-da6bc7777fdb", 00:13:19.915 "is_configured": true, 00:13:19.915 "data_offset": 2048, 00:13:19.915 "data_size": 63488 00:13:19.915 }, 00:13:19.915 { 00:13:19.915 "name": "BaseBdev3", 00:13:19.915 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:19.915 "is_configured": true, 00:13:19.915 "data_offset": 2048, 00:13:19.915 "data_size": 63488 
00:13:19.915 }, 00:13:19.915 { 00:13:19.915 "name": "BaseBdev4", 00:13:19.915 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:19.915 "is_configured": true, 00:13:19.915 "data_offset": 2048, 00:13:19.915 "data_size": 63488 00:13:19.915 } 00:13:19.915 ] 00:13:19.915 }' 00:13:19.915 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.174 [2024-11-26 13:26:08.577558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.174 [2024-11-26 13:26:08.587142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.174 13:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:20.174 [2024-11-26 13:26:08.589317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.109 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.109 "name": "raid_bdev1", 00:13:21.109 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:21.109 "strip_size_kb": 0, 00:13:21.109 "state": "online", 00:13:21.109 "raid_level": "raid1", 00:13:21.109 "superblock": true, 00:13:21.109 "num_base_bdevs": 4, 00:13:21.109 "num_base_bdevs_discovered": 4, 00:13:21.109 "num_base_bdevs_operational": 4, 00:13:21.109 "process": { 00:13:21.109 "type": "rebuild", 00:13:21.109 "target": "spare", 00:13:21.109 "progress": { 00:13:21.110 "blocks": 20480, 00:13:21.110 "percent": 32 00:13:21.110 } 00:13:21.110 }, 00:13:21.110 "base_bdevs_list": [ 00:13:21.110 { 00:13:21.110 "name": "spare", 00:13:21.110 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:21.110 "is_configured": true, 00:13:21.110 "data_offset": 2048, 00:13:21.110 "data_size": 63488 00:13:21.110 }, 00:13:21.110 { 00:13:21.110 "name": "BaseBdev2", 00:13:21.110 "uuid": "b021d977-abb7-5dd1-b1ef-da6bc7777fdb", 00:13:21.110 "is_configured": true, 00:13:21.110 "data_offset": 2048, 00:13:21.110 "data_size": 63488 00:13:21.110 }, 00:13:21.110 { 00:13:21.110 "name": "BaseBdev3", 00:13:21.110 "uuid": 
"115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:21.110 "is_configured": true, 00:13:21.110 "data_offset": 2048, 00:13:21.110 "data_size": 63488 00:13:21.110 }, 00:13:21.110 { 00:13:21.110 "name": "BaseBdev4", 00:13:21.110 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:21.110 "is_configured": true, 00:13:21.110 "data_offset": 2048, 00:13:21.110 "data_size": 63488 00:13:21.110 } 00:13:21.110 ] 00:13:21.110 }' 00:13:21.110 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.369 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.369 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.369 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.369 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:21.369 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:21.370 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.370 [2024-11-26 13:26:09.754956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.370 [2024-11-26 13:26:09.895501] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.370 13:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.629 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.629 "name": "raid_bdev1", 00:13:21.629 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:21.629 "strip_size_kb": 0, 00:13:21.629 "state": "online", 00:13:21.629 "raid_level": "raid1", 00:13:21.629 "superblock": true, 00:13:21.629 "num_base_bdevs": 4, 00:13:21.629 "num_base_bdevs_discovered": 3, 00:13:21.629 "num_base_bdevs_operational": 3, 00:13:21.629 
"process": { 00:13:21.629 "type": "rebuild", 00:13:21.629 "target": "spare", 00:13:21.629 "progress": { 00:13:21.629 "blocks": 24576, 00:13:21.629 "percent": 38 00:13:21.629 } 00:13:21.629 }, 00:13:21.629 "base_bdevs_list": [ 00:13:21.629 { 00:13:21.629 "name": "spare", 00:13:21.629 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:21.629 "is_configured": true, 00:13:21.629 "data_offset": 2048, 00:13:21.629 "data_size": 63488 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "name": null, 00:13:21.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.629 "is_configured": false, 00:13:21.629 "data_offset": 0, 00:13:21.629 "data_size": 63488 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "name": "BaseBdev3", 00:13:21.629 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:21.629 "is_configured": true, 00:13:21.629 "data_offset": 2048, 00:13:21.629 "data_size": 63488 00:13:21.629 }, 00:13:21.629 { 00:13:21.629 "name": "BaseBdev4", 00:13:21.629 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:21.629 "is_configured": true, 00:13:21.629 "data_offset": 2048, 00:13:21.629 "data_size": 63488 00:13:21.629 } 00:13:21.629 ] 00:13:21.629 }' 00:13:21.629 13:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=472 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.629 13:26:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.629 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.629 "name": "raid_bdev1", 00:13:21.629 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:21.629 "strip_size_kb": 0, 00:13:21.629 "state": "online", 00:13:21.629 "raid_level": "raid1", 00:13:21.629 "superblock": true, 00:13:21.630 "num_base_bdevs": 4, 00:13:21.630 "num_base_bdevs_discovered": 3, 00:13:21.630 "num_base_bdevs_operational": 3, 00:13:21.630 "process": { 00:13:21.630 "type": "rebuild", 00:13:21.630 "target": "spare", 00:13:21.630 "progress": { 00:13:21.630 "blocks": 26624, 00:13:21.630 "percent": 41 00:13:21.630 } 00:13:21.630 }, 00:13:21.630 "base_bdevs_list": [ 00:13:21.630 { 00:13:21.630 "name": "spare", 00:13:21.630 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:21.630 "is_configured": true, 00:13:21.630 "data_offset": 2048, 00:13:21.630 "data_size": 63488 00:13:21.630 }, 00:13:21.630 { 00:13:21.630 "name": null, 00:13:21.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.630 
"is_configured": false, 00:13:21.630 "data_offset": 0, 00:13:21.630 "data_size": 63488 00:13:21.630 }, 00:13:21.630 { 00:13:21.630 "name": "BaseBdev3", 00:13:21.630 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:21.630 "is_configured": true, 00:13:21.630 "data_offset": 2048, 00:13:21.630 "data_size": 63488 00:13:21.630 }, 00:13:21.630 { 00:13:21.630 "name": "BaseBdev4", 00:13:21.630 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:21.630 "is_configured": true, 00:13:21.630 "data_offset": 2048, 00:13:21.630 "data_size": 63488 00:13:21.630 } 00:13:21.630 ] 00:13:21.630 }' 00:13:21.630 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.630 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.630 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.888 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.888 13:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.825 "name": "raid_bdev1", 00:13:22.825 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:22.825 "strip_size_kb": 0, 00:13:22.825 "state": "online", 00:13:22.825 "raid_level": "raid1", 00:13:22.825 "superblock": true, 00:13:22.825 "num_base_bdevs": 4, 00:13:22.825 "num_base_bdevs_discovered": 3, 00:13:22.825 "num_base_bdevs_operational": 3, 00:13:22.825 "process": { 00:13:22.825 "type": "rebuild", 00:13:22.825 "target": "spare", 00:13:22.825 "progress": { 00:13:22.825 "blocks": 51200, 00:13:22.825 "percent": 80 00:13:22.825 } 00:13:22.825 }, 00:13:22.825 "base_bdevs_list": [ 00:13:22.825 { 00:13:22.825 "name": "spare", 00:13:22.825 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:22.825 "is_configured": true, 00:13:22.825 "data_offset": 2048, 00:13:22.825 "data_size": 63488 00:13:22.825 }, 00:13:22.825 { 00:13:22.825 "name": null, 00:13:22.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.825 "is_configured": false, 00:13:22.825 "data_offset": 0, 00:13:22.825 "data_size": 63488 00:13:22.825 }, 00:13:22.825 { 00:13:22.825 "name": "BaseBdev3", 00:13:22.825 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:22.825 "is_configured": true, 00:13:22.825 "data_offset": 2048, 00:13:22.825 "data_size": 63488 00:13:22.825 }, 00:13:22.825 { 00:13:22.825 "name": "BaseBdev4", 00:13:22.825 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:22.825 "is_configured": true, 00:13:22.825 "data_offset": 2048, 00:13:22.825 "data_size": 63488 00:13:22.825 } 00:13:22.825 ] 00:13:22.825 }' 00:13:22.825 
13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.825 13:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.393 [2024-11-26 13:26:11.806050] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:23.393 [2024-11-26 13:26:11.806118] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:23.393 [2024-11-26 13:26:11.806259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.976 "name": "raid_bdev1", 00:13:23.976 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:23.976 "strip_size_kb": 0, 00:13:23.976 "state": "online", 00:13:23.976 "raid_level": "raid1", 00:13:23.976 "superblock": true, 00:13:23.976 "num_base_bdevs": 4, 00:13:23.976 "num_base_bdevs_discovered": 3, 00:13:23.976 "num_base_bdevs_operational": 3, 00:13:23.976 "base_bdevs_list": [ 00:13:23.976 { 00:13:23.976 "name": "spare", 00:13:23.976 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:23.976 "is_configured": true, 00:13:23.976 "data_offset": 2048, 00:13:23.976 "data_size": 63488 00:13:23.976 }, 00:13:23.976 { 00:13:23.976 "name": null, 00:13:23.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.976 "is_configured": false, 00:13:23.976 "data_offset": 0, 00:13:23.976 "data_size": 63488 00:13:23.976 }, 00:13:23.976 { 00:13:23.976 "name": "BaseBdev3", 00:13:23.976 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:23.976 "is_configured": true, 00:13:23.976 "data_offset": 2048, 00:13:23.976 "data_size": 63488 00:13:23.976 }, 00:13:23.976 { 00:13:23.976 "name": "BaseBdev4", 00:13:23.976 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:23.976 "is_configured": true, 00:13:23.976 "data_offset": 2048, 00:13:23.976 "data_size": 63488 00:13:23.976 } 00:13:23.976 ] 00:13:23.976 }' 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:23.976 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == 
\s\p\a\r\e ]] 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.275 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.275 "name": "raid_bdev1", 00:13:24.275 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:24.275 "strip_size_kb": 0, 00:13:24.275 "state": "online", 00:13:24.275 "raid_level": "raid1", 00:13:24.275 "superblock": true, 00:13:24.275 "num_base_bdevs": 4, 00:13:24.275 "num_base_bdevs_discovered": 3, 00:13:24.275 "num_base_bdevs_operational": 3, 00:13:24.275 "base_bdevs_list": [ 00:13:24.275 { 00:13:24.275 "name": "spare", 00:13:24.275 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:24.275 "is_configured": true, 00:13:24.275 "data_offset": 2048, 00:13:24.275 "data_size": 63488 00:13:24.275 }, 00:13:24.275 { 00:13:24.275 "name": null, 00:13:24.275 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:24.275 "is_configured": false, 00:13:24.275 "data_offset": 0, 00:13:24.275 "data_size": 63488 00:13:24.275 }, 00:13:24.275 { 00:13:24.275 "name": "BaseBdev3", 00:13:24.275 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:24.275 "is_configured": true, 00:13:24.276 "data_offset": 2048, 00:13:24.276 "data_size": 63488 00:13:24.276 }, 00:13:24.276 { 00:13:24.276 "name": "BaseBdev4", 00:13:24.276 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:24.276 "is_configured": true, 00:13:24.276 "data_offset": 2048, 00:13:24.276 "data_size": 63488 00:13:24.276 } 00:13:24.276 ] 00:13:24.276 }' 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.276 
13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.276 "name": "raid_bdev1", 00:13:24.276 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:24.276 "strip_size_kb": 0, 00:13:24.276 "state": "online", 00:13:24.276 "raid_level": "raid1", 00:13:24.276 "superblock": true, 00:13:24.276 "num_base_bdevs": 4, 00:13:24.276 "num_base_bdevs_discovered": 3, 00:13:24.276 "num_base_bdevs_operational": 3, 00:13:24.276 "base_bdevs_list": [ 00:13:24.276 { 00:13:24.276 "name": "spare", 00:13:24.276 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:24.276 "is_configured": true, 00:13:24.276 "data_offset": 2048, 00:13:24.276 "data_size": 63488 00:13:24.276 }, 00:13:24.276 { 00:13:24.276 "name": null, 00:13:24.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.276 "is_configured": false, 00:13:24.276 "data_offset": 0, 00:13:24.276 "data_size": 63488 00:13:24.276 }, 00:13:24.276 { 00:13:24.276 "name": "BaseBdev3", 00:13:24.276 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:24.276 "is_configured": true, 00:13:24.276 "data_offset": 2048, 00:13:24.276 "data_size": 63488 00:13:24.276 }, 00:13:24.276 { 00:13:24.276 "name": "BaseBdev4", 00:13:24.276 "uuid": 
"51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:24.276 "is_configured": true, 00:13:24.276 "data_offset": 2048, 00:13:24.276 "data_size": 63488 00:13:24.276 } 00:13:24.276 ] 00:13:24.276 }' 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.276 13:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.845 [2024-11-26 13:26:13.212895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.845 [2024-11-26 13:26:13.213054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.845 [2024-11-26 13:26:13.213263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.845 [2024-11-26 13:26:13.213483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.845 [2024-11-26 13:26:13.213651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:24.845 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:25.104 /dev/nbd0 00:13:25.104 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:25.104 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:25.104 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:25.104 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:25.104 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 00:13:25.104 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:25.104 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.105 1+0 records in 00:13:25.105 1+0 records out 00:13:25.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582419 s, 7.0 MB/s 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:25.105 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:25.364 /dev/nbd1 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:25.364 13:26:13 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.364 1+0 records in 00:13:25.364 1+0 records out 00:13:25.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323103 s, 12.7 MB/s 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:25.364 13:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:25.623 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:25.623 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.623 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:25.623 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.623 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:25.623 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.623 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.882 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.141 [2024-11-26 13:26:14.659548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:26.141 [2024-11-26 13:26:14.659603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
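For reference, the `waitfornbd` / `waitfornbd_exit` helpers traced above (`common/autotest_common.sh@872-893`, `bdev/nbd_common.sh@35-45`) amount to a bounded polling loop against `/proc/partitions`. A minimal sketch follows; the `PARTITIONS` override variable is an assumption added here purely so the loop can be exercised against a plain file, and the real helper additionally verifies a 4 KiB `O_DIRECT` read with `dd` after the grep succeeds:

```shell
# Sketch of the waitfornbd polling loop seen in the trace: retry up to 20
# times until the named nbd device appears in the partitions table.
# PARTITIONS is a test-only override (assumption, not in the real script,
# which always reads /proc/partitions).
waitfornbd() {
    local nbd_name=$1 i
    local partitions=${PARTITIONS:-/proc/partitions}
    for ((i = 1; i <= 20; i++)); do
        # -w matches the device name as a whole word, so nbd1 won't match nbd10
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}
```

In the trace above the loop succeeds on the first iteration for both `nbd0` and `nbd1`, which is why each `break` fires immediately after the `grep`.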
00:13:26.141 [2024-11-26 13:26:14.659630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:26.141 [2024-11-26 13:26:14.659643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.141 [2024-11-26 13:26:14.661928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.141 [2024-11-26 13:26:14.661967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:26.141 [2024-11-26 13:26:14.662059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:26.141 [2024-11-26 13:26:14.662114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.141 [2024-11-26 13:26:14.662279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.141 [2024-11-26 13:26:14.662394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.141 spare 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.141 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.399 [2024-11-26 13:26:14.762491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:26.399 [2024-11-26 13:26:14.762516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.399 [2024-11-26 13:26:14.762804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:26.399 [2024-11-26 13:26:14.762986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:26.399 [2024-11-26 13:26:14.763006] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:26.399 [2024-11-26 13:26:14.763154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.399 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.399 
13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.399 "name": "raid_bdev1", 00:13:26.399 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:26.399 "strip_size_kb": 0, 00:13:26.399 "state": "online", 00:13:26.399 "raid_level": "raid1", 00:13:26.399 "superblock": true, 00:13:26.399 "num_base_bdevs": 4, 00:13:26.400 "num_base_bdevs_discovered": 3, 00:13:26.400 "num_base_bdevs_operational": 3, 00:13:26.400 "base_bdevs_list": [ 00:13:26.400 { 00:13:26.400 "name": "spare", 00:13:26.400 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:26.400 "is_configured": true, 00:13:26.400 "data_offset": 2048, 00:13:26.400 "data_size": 63488 00:13:26.400 }, 00:13:26.400 { 00:13:26.400 "name": null, 00:13:26.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.400 "is_configured": false, 00:13:26.400 "data_offset": 2048, 00:13:26.400 "data_size": 63488 00:13:26.400 }, 00:13:26.400 { 00:13:26.400 "name": "BaseBdev3", 00:13:26.400 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:26.400 "is_configured": true, 00:13:26.400 "data_offset": 2048, 00:13:26.400 "data_size": 63488 00:13:26.400 }, 00:13:26.400 { 00:13:26.400 "name": "BaseBdev4", 00:13:26.400 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:26.400 "is_configured": true, 00:13:26.400 "data_offset": 2048, 00:13:26.400 "data_size": 63488 00:13:26.400 } 00:13:26.400 ] 00:13:26.400 }' 00:13:26.400 13:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.400 13:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:26.968 13:26:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.968 "name": "raid_bdev1", 00:13:26.968 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:26.968 "strip_size_kb": 0, 00:13:26.968 "state": "online", 00:13:26.968 "raid_level": "raid1", 00:13:26.968 "superblock": true, 00:13:26.968 "num_base_bdevs": 4, 00:13:26.968 "num_base_bdevs_discovered": 3, 00:13:26.968 "num_base_bdevs_operational": 3, 00:13:26.968 "base_bdevs_list": [ 00:13:26.968 { 00:13:26.968 "name": "spare", 00:13:26.968 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:26.968 "is_configured": true, 00:13:26.968 "data_offset": 2048, 00:13:26.968 "data_size": 63488 00:13:26.968 }, 00:13:26.968 { 00:13:26.968 "name": null, 00:13:26.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.968 "is_configured": false, 00:13:26.968 "data_offset": 2048, 00:13:26.968 "data_size": 63488 00:13:26.968 }, 00:13:26.968 { 00:13:26.968 "name": "BaseBdev3", 00:13:26.968 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:26.968 "is_configured": true, 00:13:26.968 "data_offset": 2048, 00:13:26.968 "data_size": 63488 00:13:26.968 }, 00:13:26.968 { 00:13:26.968 "name": "BaseBdev4", 00:13:26.968 "uuid": 
"51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:26.968 "is_configured": true, 00:13:26.968 "data_offset": 2048, 00:13:26.968 "data_size": 63488 00:13:26.968 } 00:13:26.968 ] 00:13:26.968 }' 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.968 [2024-11-26 13:26:15.475792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.968 13:26:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.968 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.227 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.227 "name": "raid_bdev1", 00:13:27.227 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:27.227 "strip_size_kb": 0, 00:13:27.227 "state": "online", 00:13:27.227 "raid_level": "raid1", 00:13:27.227 "superblock": true, 00:13:27.227 "num_base_bdevs": 4, 00:13:27.227 "num_base_bdevs_discovered": 2, 00:13:27.227 "num_base_bdevs_operational": 2, 00:13:27.227 "base_bdevs_list": [ 00:13:27.227 { 
00:13:27.227 "name": null, 00:13:27.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.227 "is_configured": false, 00:13:27.227 "data_offset": 0, 00:13:27.227 "data_size": 63488 00:13:27.227 }, 00:13:27.227 { 00:13:27.227 "name": null, 00:13:27.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.227 "is_configured": false, 00:13:27.227 "data_offset": 2048, 00:13:27.227 "data_size": 63488 00:13:27.227 }, 00:13:27.227 { 00:13:27.227 "name": "BaseBdev3", 00:13:27.227 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:27.227 "is_configured": true, 00:13:27.227 "data_offset": 2048, 00:13:27.227 "data_size": 63488 00:13:27.227 }, 00:13:27.227 { 00:13:27.227 "name": "BaseBdev4", 00:13:27.227 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:27.227 "is_configured": true, 00:13:27.227 "data_offset": 2048, 00:13:27.227 "data_size": 63488 00:13:27.227 } 00:13:27.227 ] 00:13:27.227 }' 00:13:27.227 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.227 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.487 13:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:27.487 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.487 13:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.487 [2024-11-26 13:26:15.991872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.487 [2024-11-26 13:26:15.992005] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:27.487 [2024-11-26 13:26:15.992026] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:27.487 [2024-11-26 13:26:15.992064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.487 [2024-11-26 13:26:16.002724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:27.487 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.487 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:27.487 [2024-11-26 13:26:16.004852] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.867 "name": "raid_bdev1", 00:13:28.867 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:28.867 "strip_size_kb": 0, 00:13:28.867 "state": "online", 00:13:28.867 "raid_level": "raid1", 
00:13:28.867 "superblock": true, 00:13:28.867 "num_base_bdevs": 4, 00:13:28.867 "num_base_bdevs_discovered": 3, 00:13:28.867 "num_base_bdevs_operational": 3, 00:13:28.867 "process": { 00:13:28.867 "type": "rebuild", 00:13:28.867 "target": "spare", 00:13:28.867 "progress": { 00:13:28.867 "blocks": 20480, 00:13:28.867 "percent": 32 00:13:28.867 } 00:13:28.867 }, 00:13:28.867 "base_bdevs_list": [ 00:13:28.867 { 00:13:28.867 "name": "spare", 00:13:28.867 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:28.867 "is_configured": true, 00:13:28.867 "data_offset": 2048, 00:13:28.867 "data_size": 63488 00:13:28.867 }, 00:13:28.867 { 00:13:28.867 "name": null, 00:13:28.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.867 "is_configured": false, 00:13:28.867 "data_offset": 2048, 00:13:28.867 "data_size": 63488 00:13:28.867 }, 00:13:28.867 { 00:13:28.867 "name": "BaseBdev3", 00:13:28.867 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:28.867 "is_configured": true, 00:13:28.867 "data_offset": 2048, 00:13:28.867 "data_size": 63488 00:13:28.867 }, 00:13:28.867 { 00:13:28.867 "name": "BaseBdev4", 00:13:28.867 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:28.867 "is_configured": true, 00:13:28.867 "data_offset": 2048, 00:13:28.867 "data_size": 63488 00:13:28.867 } 00:13:28.867 ] 00:13:28.867 }' 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.867 [2024-11-26 13:26:17.174682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.867 [2024-11-26 13:26:17.212341] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.867 [2024-11-26 13:26:17.212399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.867 [2024-11-26 13:26:17.212422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.867 [2024-11-26 13:26:17.212431] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.867 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.867 "name": "raid_bdev1", 00:13:28.868 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:28.868 "strip_size_kb": 0, 00:13:28.868 "state": "online", 00:13:28.868 "raid_level": "raid1", 00:13:28.868 "superblock": true, 00:13:28.868 "num_base_bdevs": 4, 00:13:28.868 "num_base_bdevs_discovered": 2, 00:13:28.868 "num_base_bdevs_operational": 2, 00:13:28.868 "base_bdevs_list": [ 00:13:28.868 { 00:13:28.868 "name": null, 00:13:28.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.868 "is_configured": false, 00:13:28.868 "data_offset": 0, 00:13:28.868 "data_size": 63488 00:13:28.868 }, 00:13:28.868 { 00:13:28.868 "name": null, 00:13:28.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.868 "is_configured": false, 00:13:28.868 "data_offset": 2048, 00:13:28.868 "data_size": 63488 00:13:28.868 }, 00:13:28.868 { 00:13:28.868 "name": "BaseBdev3", 00:13:28.868 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:28.868 "is_configured": true, 00:13:28.868 "data_offset": 2048, 00:13:28.868 "data_size": 63488 00:13:28.868 }, 00:13:28.868 { 00:13:28.868 "name": "BaseBdev4", 00:13:28.868 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:28.868 "is_configured": true, 00:13:28.868 "data_offset": 2048, 00:13:28.868 "data_size": 63488 00:13:28.868 } 00:13:28.868 ] 00:13:28.868 }' 00:13:28.868 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:28.868 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.435 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.435 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.435 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.435 [2024-11-26 13:26:17.738793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.435 [2024-11-26 13:26:17.739001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.435 [2024-11-26 13:26:17.739047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:29.435 [2024-11-26 13:26:17.739062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.435 [2024-11-26 13:26:17.739618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.435 [2024-11-26 13:26:17.739647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.435 [2024-11-26 13:26:17.739739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:29.435 [2024-11-26 13:26:17.739755] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:29.435 [2024-11-26 13:26:17.739774] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
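The spare remove/re-create cycle traced above (`bdev_raid.sh@745-747` and again at `@763`) boils down to a short RPC sequence: delete the passthru vbdev, recreate it on the delay bdev, and wait for examine to re-add `spare` to `raid_bdev1` from its superblock, at which point the rebuild starts. A dry-runnable sketch; the `RPC` default mirrors the `rpc.py` invocation in the trace, and `rebuild_cycle` is a hypothetical wrapper name:

```shell
# RPC defaults to the rpc.py call used throughout the trace; set RPC=echo to
# dry-run the sequence without a running SPDK target.
RPC=${RPC:-"/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"}

# rebuild_cycle is a hypothetical wrapper around the three RPCs the test
# issues; it is not a function defined in bdev_raid.sh.
rebuild_cycle() {
    $RPC bdev_passthru_delete spare                     # tear down the passthru vbdev
    $RPC bdev_passthru_create -b spare_delay -p spare   # recreate it on the delay bdev
    $RPC bdev_wait_for_examine                          # examine re-adds spare to raid_bdev1
}
```

After `bdev_wait_for_examine` returns, the log shows the `Re-adding bdev spare to raid bdev raid_bdev1` notice followed by `Started rebuild on raid bdev raid_bdev1`, which is what the subsequent `verify_raid_bdev_process raid_bdev1 rebuild spare` asserts.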
00:13:29.435 [2024-11-26 13:26:17.739804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.435 [2024-11-26 13:26:17.749361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:29.435 spare 00:13:29.435 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.435 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:29.435 [2024-11-26 13:26:17.751718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.374 "name": "raid_bdev1", 00:13:30.374 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:30.374 "strip_size_kb": 0, 00:13:30.374 "state": "online", 00:13:30.374 
"raid_level": "raid1", 00:13:30.374 "superblock": true, 00:13:30.374 "num_base_bdevs": 4, 00:13:30.374 "num_base_bdevs_discovered": 3, 00:13:30.374 "num_base_bdevs_operational": 3, 00:13:30.374 "process": { 00:13:30.374 "type": "rebuild", 00:13:30.374 "target": "spare", 00:13:30.374 "progress": { 00:13:30.374 "blocks": 20480, 00:13:30.374 "percent": 32 00:13:30.374 } 00:13:30.374 }, 00:13:30.374 "base_bdevs_list": [ 00:13:30.374 { 00:13:30.374 "name": "spare", 00:13:30.374 "uuid": "0d4ae951-d32a-5bb4-9290-6a1ba3fe3062", 00:13:30.374 "is_configured": true, 00:13:30.374 "data_offset": 2048, 00:13:30.374 "data_size": 63488 00:13:30.374 }, 00:13:30.374 { 00:13:30.374 "name": null, 00:13:30.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.374 "is_configured": false, 00:13:30.374 "data_offset": 2048, 00:13:30.374 "data_size": 63488 00:13:30.374 }, 00:13:30.374 { 00:13:30.374 "name": "BaseBdev3", 00:13:30.374 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:30.374 "is_configured": true, 00:13:30.374 "data_offset": 2048, 00:13:30.374 "data_size": 63488 00:13:30.374 }, 00:13:30.374 { 00:13:30.374 "name": "BaseBdev4", 00:13:30.374 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:30.374 "is_configured": true, 00:13:30.374 "data_offset": 2048, 00:13:30.374 "data_size": 63488 00:13:30.374 } 00:13:30.374 ] 00:13:30.374 }' 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.374 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.374 [2024-11-26 13:26:18.917506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.633 [2024-11-26 13:26:18.957978] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:30.633 [2024-11-26 13:26:18.958199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.633 [2024-11-26 13:26:18.958357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.633 [2024-11-26 13:26:18.958410] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.633 
13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.633 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.633 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.633 "name": "raid_bdev1", 00:13:30.633 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:30.633 "strip_size_kb": 0, 00:13:30.633 "state": "online", 00:13:30.633 "raid_level": "raid1", 00:13:30.633 "superblock": true, 00:13:30.633 "num_base_bdevs": 4, 00:13:30.633 "num_base_bdevs_discovered": 2, 00:13:30.633 "num_base_bdevs_operational": 2, 00:13:30.633 "base_bdevs_list": [ 00:13:30.633 { 00:13:30.633 "name": null, 00:13:30.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.633 "is_configured": false, 00:13:30.633 "data_offset": 0, 00:13:30.633 "data_size": 63488 00:13:30.633 }, 00:13:30.633 { 00:13:30.633 "name": null, 00:13:30.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.633 "is_configured": false, 00:13:30.633 "data_offset": 2048, 00:13:30.633 "data_size": 63488 00:13:30.633 }, 00:13:30.633 { 00:13:30.633 "name": "BaseBdev3", 00:13:30.633 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:30.633 "is_configured": true, 00:13:30.633 "data_offset": 2048, 00:13:30.633 "data_size": 63488 00:13:30.633 }, 00:13:30.633 { 00:13:30.633 "name": "BaseBdev4", 00:13:30.633 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:30.633 "is_configured": true, 00:13:30.633 "data_offset": 2048, 00:13:30.633 "data_size": 63488 00:13:30.633 } 00:13:30.633 ] 00:13:30.633 }' 00:13:30.633 13:26:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.634 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.200 "name": "raid_bdev1", 00:13:31.200 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:31.200 "strip_size_kb": 0, 00:13:31.200 "state": "online", 00:13:31.200 "raid_level": "raid1", 00:13:31.200 "superblock": true, 00:13:31.200 "num_base_bdevs": 4, 00:13:31.200 "num_base_bdevs_discovered": 2, 00:13:31.200 "num_base_bdevs_operational": 2, 00:13:31.200 "base_bdevs_list": [ 00:13:31.200 { 00:13:31.200 "name": null, 00:13:31.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.200 "is_configured": false, 00:13:31.200 "data_offset": 0, 00:13:31.200 "data_size": 63488 00:13:31.200 }, 00:13:31.200 
{ 00:13:31.200 "name": null, 00:13:31.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.200 "is_configured": false, 00:13:31.200 "data_offset": 2048, 00:13:31.200 "data_size": 63488 00:13:31.200 }, 00:13:31.200 { 00:13:31.200 "name": "BaseBdev3", 00:13:31.200 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:31.200 "is_configured": true, 00:13:31.200 "data_offset": 2048, 00:13:31.200 "data_size": 63488 00:13:31.200 }, 00:13:31.200 { 00:13:31.200 "name": "BaseBdev4", 00:13:31.200 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:31.200 "is_configured": true, 00:13:31.200 "data_offset": 2048, 00:13:31.200 "data_size": 63488 00:13:31.200 } 00:13:31.200 ] 00:13:31.200 }' 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.200 [2024-11-26 13:26:19.664936] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:31.200 [2024-11-26 13:26:19.664995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.200 [2024-11-26 13:26:19.665021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:31.200 [2024-11-26 13:26:19.665036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.200 [2024-11-26 13:26:19.665523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.200 [2024-11-26 13:26:19.665559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:31.200 [2024-11-26 13:26:19.665654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:31.200 [2024-11-26 13:26:19.665684] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:31.200 [2024-11-26 13:26:19.665695] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:31.200 [2024-11-26 13:26:19.665719] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:31.200 BaseBdev1 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.200 13:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.136 13:26:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.136 13:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.395 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.395 "name": "raid_bdev1", 00:13:32.395 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:32.395 "strip_size_kb": 0, 00:13:32.395 "state": "online", 00:13:32.395 "raid_level": "raid1", 00:13:32.395 "superblock": true, 00:13:32.395 "num_base_bdevs": 4, 00:13:32.395 "num_base_bdevs_discovered": 2, 00:13:32.395 "num_base_bdevs_operational": 2, 00:13:32.395 "base_bdevs_list": [ 00:13:32.395 { 00:13:32.395 "name": null, 00:13:32.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.395 "is_configured": false, 00:13:32.395 "data_offset": 0, 00:13:32.395 "data_size": 63488 00:13:32.395 }, 00:13:32.395 { 00:13:32.395 "name": null, 00:13:32.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.395 
"is_configured": false, 00:13:32.395 "data_offset": 2048, 00:13:32.395 "data_size": 63488 00:13:32.395 }, 00:13:32.395 { 00:13:32.395 "name": "BaseBdev3", 00:13:32.395 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:32.395 "is_configured": true, 00:13:32.395 "data_offset": 2048, 00:13:32.395 "data_size": 63488 00:13:32.395 }, 00:13:32.395 { 00:13:32.395 "name": "BaseBdev4", 00:13:32.395 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:32.395 "is_configured": true, 00:13:32.395 "data_offset": 2048, 00:13:32.395 "data_size": 63488 00:13:32.395 } 00:13:32.395 ] 00:13:32.395 }' 00:13:32.395 13:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.395 13:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.654 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.913 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:32.913 "name": "raid_bdev1", 00:13:32.913 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:32.913 "strip_size_kb": 0, 00:13:32.913 "state": "online", 00:13:32.913 "raid_level": "raid1", 00:13:32.913 "superblock": true, 00:13:32.913 "num_base_bdevs": 4, 00:13:32.913 "num_base_bdevs_discovered": 2, 00:13:32.913 "num_base_bdevs_operational": 2, 00:13:32.913 "base_bdevs_list": [ 00:13:32.913 { 00:13:32.913 "name": null, 00:13:32.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.913 "is_configured": false, 00:13:32.913 "data_offset": 0, 00:13:32.913 "data_size": 63488 00:13:32.913 }, 00:13:32.913 { 00:13:32.913 "name": null, 00:13:32.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.913 "is_configured": false, 00:13:32.913 "data_offset": 2048, 00:13:32.913 "data_size": 63488 00:13:32.913 }, 00:13:32.913 { 00:13:32.913 "name": "BaseBdev3", 00:13:32.913 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:32.913 "is_configured": true, 00:13:32.913 "data_offset": 2048, 00:13:32.913 "data_size": 63488 00:13:32.913 }, 00:13:32.913 { 00:13:32.913 "name": "BaseBdev4", 00:13:32.913 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:32.913 "is_configured": true, 00:13:32.913 "data_offset": 2048, 00:13:32.913 "data_size": 63488 00:13:32.914 } 00:13:32.914 ] 00:13:32.914 }' 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.914 [2024-11-26 13:26:21.365291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.914 [2024-11-26 13:26:21.365447] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:32.914 [2024-11-26 13:26:21.365466] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:32.914 request: 00:13:32.914 { 00:13:32.914 "base_bdev": "BaseBdev1", 00:13:32.914 "raid_bdev": "raid_bdev1", 00:13:32.914 "method": "bdev_raid_add_base_bdev", 00:13:32.914 "req_id": 1 00:13:32.914 } 00:13:32.914 Got JSON-RPC error response 00:13:32.914 response: 00:13:32.914 { 00:13:32.914 "code": -22, 00:13:32.914 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:32.914 } 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.914 13:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:33.851 13:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.110 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.110 "name": "raid_bdev1", 00:13:34.110 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:34.110 "strip_size_kb": 0, 00:13:34.110 "state": "online", 00:13:34.110 "raid_level": "raid1", 00:13:34.110 "superblock": true, 00:13:34.110 "num_base_bdevs": 4, 00:13:34.110 "num_base_bdevs_discovered": 2, 00:13:34.110 "num_base_bdevs_operational": 2, 00:13:34.110 "base_bdevs_list": [ 00:13:34.110 { 00:13:34.110 "name": null, 00:13:34.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.110 "is_configured": false, 00:13:34.110 "data_offset": 0, 00:13:34.110 "data_size": 63488 00:13:34.110 }, 00:13:34.110 { 00:13:34.110 "name": null, 00:13:34.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.110 "is_configured": false, 00:13:34.110 "data_offset": 2048, 00:13:34.110 "data_size": 63488 00:13:34.110 }, 00:13:34.110 { 00:13:34.110 "name": "BaseBdev3", 00:13:34.110 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:34.110 "is_configured": true, 00:13:34.110 "data_offset": 2048, 00:13:34.110 "data_size": 63488 00:13:34.110 }, 00:13:34.110 { 00:13:34.110 "name": "BaseBdev4", 00:13:34.110 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:34.110 "is_configured": true, 00:13:34.110 "data_offset": 2048, 00:13:34.110 "data_size": 63488 00:13:34.110 } 00:13:34.110 ] 00:13:34.110 }' 00:13:34.110 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.110 13:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.369 13:26:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.369 13:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.628 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.628 "name": "raid_bdev1", 00:13:34.628 "uuid": "ec8328bc-cffa-4549-b867-99a5adccdb0b", 00:13:34.628 "strip_size_kb": 0, 00:13:34.628 "state": "online", 00:13:34.628 "raid_level": "raid1", 00:13:34.628 "superblock": true, 00:13:34.628 "num_base_bdevs": 4, 00:13:34.628 "num_base_bdevs_discovered": 2, 00:13:34.628 "num_base_bdevs_operational": 2, 00:13:34.628 "base_bdevs_list": [ 00:13:34.628 { 00:13:34.628 "name": null, 00:13:34.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.628 "is_configured": false, 00:13:34.628 "data_offset": 0, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": null, 00:13:34.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.628 "is_configured": false, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": "BaseBdev3", 00:13:34.628 "uuid": "115e7640-5c09-581b-9f4e-11d5259f746b", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 
00:13:34.628 { 00:13:34.628 "name": "BaseBdev4", 00:13:34.628 "uuid": "51b6c744-ce79-5671-9878-5170ef3e8b3a", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 } 00:13:34.628 ] 00:13:34.628 }' 00:13:34.628 13:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77585 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77585 ']' 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77585 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77585 00:13:34.628 killing process with pid 77585 00:13:34.628 Received shutdown signal, test time was about 60.000000 seconds 00:13:34.628 00:13:34.628 Latency(us) 00:13:34.628 [2024-11-26T13:26:23.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.628 [2024-11-26T13:26:23.198Z] =================================================================================================================== 00:13:34.628 [2024-11-26T13:26:23.198Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77585' 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77585 00:13:34.628 [2024-11-26 13:26:23.103733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:34.628 [2024-11-26 13:26:23.103826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.628 13:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77585 00:13:34.628 [2024-11-26 13:26:23.103893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.628 [2024-11-26 13:26:23.103907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:34.887 [2024-11-26 13:26:23.433778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:35.820 00:13:35.820 real 0m27.167s 00:13:35.820 user 0m33.378s 00:13:35.820 sys 0m3.733s 00:13:35.820 ************************************ 00:13:35.820 END TEST raid_rebuild_test_sb 00:13:35.820 ************************************ 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.820 13:26:24 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:35.820 13:26:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:35.820 13:26:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.820 13:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
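Editorial note between test sections: the `verify_raid_bdev_process` checks traced above (e.g. `bdev_raid.sh@174`–`@177`) repeatedly pipe the `bdev_raid_get_bdevs` RPC output through `jq`. The sketch below is an illustrative stand-alone reconstruction of that pattern, not code from the SPDK repository; the sample JSON is a trimmed, hypothetical stand-in for the real RPC response shown in the log.

```shell
#!/usr/bin/env bash
# Illustrative sketch only (assumes jq is installed): emulate the log's
# verify_raid_bdev_process checks against a trimmed sample RPC response.
# The JSON below is a hypothetical stand-in for `rpc_cmd bdev_raid_get_bdevs all`.
response='[{"name":"raid_bdev1","state":"online","process":{"type":"rebuild","target":"spare","progress":{"blocks":20480,"percent":32}}}]'

# Select the bdev of interest, as the traced `bdev_raid.sh@174` line does.
raid_bdev_info=$(echo "$response" | jq -r '.[] | select(.name == "raid_bdev1")')

# Extract process type and target, defaulting to "none" when the raid bdev
# has no background process (mirrors the `// "none"` filters at @176/@177).
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')

echo "type=$process_type target=$process_target"
```

After a rebuild finishes (or the target bdev is removed, as happens mid-log when `bdev_passthru_delete spare` runs), the same filters fall back to `none`/`none`, which is exactly what the later `[[ none == \n\o\n\e ]]` trace lines assert.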
00:13:35.820 ************************************ 00:13:35.820 START TEST raid_rebuild_test_io 00:13:35.820 ************************************ 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:35.820 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78362 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:35.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78362 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78362 ']' 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.821 13:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.080 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:36.080 Zero copy mechanism will not be used. 00:13:36.080 [2024-11-26 13:26:24.461284] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:13:36.080 [2024-11-26 13:26:24.461476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78362 ] 00:13:36.080 [2024-11-26 13:26:24.623582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.339 [2024-11-26 13:26:24.735400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.598 [2024-11-26 13:26:24.926199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.598 [2024-11-26 13:26:24.926255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.858 BaseBdev1_malloc 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.858 [2024-11-26 13:26:25.364553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:36.858 [2024-11-26 13:26:25.364641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.858 [2024-11-26 13:26:25.364674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:36.858 [2024-11-26 13:26:25.364691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.858 [2024-11-26 13:26:25.367169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.858 [2024-11-26 13:26:25.367502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:36.858 BaseBdev1 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.858 BaseBdev2_malloc 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.858 [2024-11-26 13:26:25.414272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:36.858 [2024-11-26 13:26:25.414542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.858 [2024-11-26 13:26:25.414578] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:36.858 [2024-11-26 13:26:25.414597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.858 [2024-11-26 13:26:25.417105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.858 [2024-11-26 13:26:25.417150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:36.858 BaseBdev2 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.858 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 BaseBdev3_malloc 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 [2024-11-26 13:26:25.468651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:37.119 [2024-11-26 13:26:25.468713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.119 [2024-11-26 13:26:25.468744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:37.119 [2024-11-26 13:26:25.468760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:37.119 [2024-11-26 13:26:25.471211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.119 [2024-11-26 13:26:25.471270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:37.119 BaseBdev3 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 BaseBdev4_malloc 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 [2024-11-26 13:26:25.514735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:37.119 [2024-11-26 13:26:25.514804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.119 [2024-11-26 13:26:25.514830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:37.119 [2024-11-26 13:26:25.514846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.119 [2024-11-26 13:26:25.517305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.119 [2024-11-26 13:26:25.517350] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:37.119 BaseBdev4 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 spare_malloc 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 spare_delay 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 [2024-11-26 13:26:25.572027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:37.119 [2024-11-26 13:26:25.572090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.119 [2024-11-26 13:26:25.572116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:37.119 [2024-11-26 13:26:25.572132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:37.119 [2024-11-26 13:26:25.574597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.119 [2024-11-26 13:26:25.574903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:37.119 spare 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.119 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.119 [2024-11-26 13:26:25.580080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.119 [2024-11-26 13:26:25.582244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:37.119 [2024-11-26 13:26:25.582333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.119 [2024-11-26 13:26:25.582407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:37.119 [2024-11-26 13:26:25.582506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:37.119 [2024-11-26 13:26:25.582527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:37.119 [2024-11-26 13:26:25.582808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:37.119 [2024-11-26 13:26:25.583030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:37.120 [2024-11-26 13:26:25.583049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:37.120 [2024-11-26 13:26:25.583228] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.120 "name": "raid_bdev1", 00:13:37.120 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:37.120 
"strip_size_kb": 0, 00:13:37.120 "state": "online", 00:13:37.120 "raid_level": "raid1", 00:13:37.120 "superblock": false, 00:13:37.120 "num_base_bdevs": 4, 00:13:37.120 "num_base_bdevs_discovered": 4, 00:13:37.120 "num_base_bdevs_operational": 4, 00:13:37.120 "base_bdevs_list": [ 00:13:37.120 { 00:13:37.120 "name": "BaseBdev1", 00:13:37.120 "uuid": "70d4a68d-13dc-544f-b64f-64d20b27097d", 00:13:37.120 "is_configured": true, 00:13:37.120 "data_offset": 0, 00:13:37.120 "data_size": 65536 00:13:37.120 }, 00:13:37.120 { 00:13:37.120 "name": "BaseBdev2", 00:13:37.120 "uuid": "9a5b2a1c-be24-54c0-87f7-37a7f8d037af", 00:13:37.120 "is_configured": true, 00:13:37.120 "data_offset": 0, 00:13:37.120 "data_size": 65536 00:13:37.120 }, 00:13:37.120 { 00:13:37.120 "name": "BaseBdev3", 00:13:37.120 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:37.120 "is_configured": true, 00:13:37.120 "data_offset": 0, 00:13:37.120 "data_size": 65536 00:13:37.120 }, 00:13:37.120 { 00:13:37.120 "name": "BaseBdev4", 00:13:37.120 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:37.120 "is_configured": true, 00:13:37.120 "data_offset": 0, 00:13:37.120 "data_size": 65536 00:13:37.120 } 00:13:37.120 ] 00:13:37.120 }' 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.120 13:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 [2024-11-26 13:26:26.060507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.690 13:26:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 [2024-11-26 13:26:26.160134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.690 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.690 "name": "raid_bdev1", 00:13:37.690 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:37.690 "strip_size_kb": 0, 00:13:37.690 "state": "online", 00:13:37.690 "raid_level": "raid1", 00:13:37.690 "superblock": false, 00:13:37.690 "num_base_bdevs": 4, 00:13:37.691 "num_base_bdevs_discovered": 3, 00:13:37.691 "num_base_bdevs_operational": 3, 00:13:37.691 "base_bdevs_list": [ 00:13:37.691 { 00:13:37.691 "name": null, 00:13:37.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.691 "is_configured": false, 00:13:37.691 "data_offset": 0, 00:13:37.691 "data_size": 65536 00:13:37.691 
}, 00:13:37.691 { 00:13:37.691 "name": "BaseBdev2", 00:13:37.691 "uuid": "9a5b2a1c-be24-54c0-87f7-37a7f8d037af", 00:13:37.691 "is_configured": true, 00:13:37.691 "data_offset": 0, 00:13:37.691 "data_size": 65536 00:13:37.691 }, 00:13:37.691 { 00:13:37.691 "name": "BaseBdev3", 00:13:37.691 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:37.691 "is_configured": true, 00:13:37.691 "data_offset": 0, 00:13:37.691 "data_size": 65536 00:13:37.691 }, 00:13:37.691 { 00:13:37.691 "name": "BaseBdev4", 00:13:37.691 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:37.691 "is_configured": true, 00:13:37.691 "data_offset": 0, 00:13:37.691 "data_size": 65536 00:13:37.691 } 00:13:37.691 ] 00:13:37.691 }' 00:13:37.691 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.691 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.951 [2024-11-26 13:26:26.260312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:37.951 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:37.951 Zero copy mechanism will not be used. 00:13:37.951 Running I/O for 60 seconds... 
00:13:38.210 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.210 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.210 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.210 [2024-11-26 13:26:26.656857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.210 13:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.210 13:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:38.210 [2024-11-26 13:26:26.725197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:38.210 [2024-11-26 13:26:26.727672] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.470 [2024-11-26 13:26:26.830453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.470 [2024-11-26 13:26:26.830959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.470 [2024-11-26 13:26:26.973981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.988 156.00 IOPS, 468.00 MiB/s [2024-11-26T13:26:27.558Z] [2024-11-26 13:26:27.297987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:38.988 [2024-11-26 13:26:27.516281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:38.988 [2024-11-26 13:26:27.516660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.248 "name": "raid_bdev1", 00:13:39.248 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:39.248 "strip_size_kb": 0, 00:13:39.248 "state": "online", 00:13:39.248 "raid_level": "raid1", 00:13:39.248 "superblock": false, 00:13:39.248 "num_base_bdevs": 4, 00:13:39.248 "num_base_bdevs_discovered": 4, 00:13:39.248 "num_base_bdevs_operational": 4, 00:13:39.248 "process": { 00:13:39.248 "type": "rebuild", 00:13:39.248 "target": "spare", 00:13:39.248 "progress": { 00:13:39.248 "blocks": 10240, 00:13:39.248 "percent": 15 00:13:39.248 } 00:13:39.248 }, 00:13:39.248 "base_bdevs_list": [ 00:13:39.248 { 00:13:39.248 "name": "spare", 00:13:39.248 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:39.248 "is_configured": true, 00:13:39.248 "data_offset": 0, 00:13:39.248 "data_size": 65536 00:13:39.248 }, 00:13:39.248 { 
00:13:39.248 "name": "BaseBdev2", 00:13:39.248 "uuid": "9a5b2a1c-be24-54c0-87f7-37a7f8d037af", 00:13:39.248 "is_configured": true, 00:13:39.248 "data_offset": 0, 00:13:39.248 "data_size": 65536 00:13:39.248 }, 00:13:39.248 { 00:13:39.248 "name": "BaseBdev3", 00:13:39.248 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:39.248 "is_configured": true, 00:13:39.248 "data_offset": 0, 00:13:39.248 "data_size": 65536 00:13:39.248 }, 00:13:39.248 { 00:13:39.248 "name": "BaseBdev4", 00:13:39.248 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:39.248 "is_configured": true, 00:13:39.248 "data_offset": 0, 00:13:39.248 "data_size": 65536 00:13:39.248 } 00:13:39.248 ] 00:13:39.248 }' 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.248 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.508 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.508 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.508 13:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.508 13:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.508 13:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.508 [2024-11-26 13:26:27.869468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.508 [2024-11-26 13:26:27.887504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:39.508 [2024-11-26 13:26:27.989516] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.508 [2024-11-26 13:26:28.007969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:39.508 [2024-11-26 13:26:28.008329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.508 [2024-11-26 13:26:28.008359] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.508 [2024-11-26 13:26:28.036422] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.508 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.508 13:26:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.767 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.767 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.767 "name": "raid_bdev1", 00:13:39.767 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:39.767 "strip_size_kb": 0, 00:13:39.767 "state": "online", 00:13:39.767 "raid_level": "raid1", 00:13:39.767 "superblock": false, 00:13:39.767 "num_base_bdevs": 4, 00:13:39.767 "num_base_bdevs_discovered": 3, 00:13:39.767 "num_base_bdevs_operational": 3, 00:13:39.767 "base_bdevs_list": [ 00:13:39.768 { 00:13:39.768 "name": null, 00:13:39.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.768 "is_configured": false, 00:13:39.768 "data_offset": 0, 00:13:39.768 "data_size": 65536 00:13:39.768 }, 00:13:39.768 { 00:13:39.768 "name": "BaseBdev2", 00:13:39.768 "uuid": "9a5b2a1c-be24-54c0-87f7-37a7f8d037af", 00:13:39.768 "is_configured": true, 00:13:39.768 "data_offset": 0, 00:13:39.768 "data_size": 65536 00:13:39.768 }, 00:13:39.768 { 00:13:39.768 "name": "BaseBdev3", 00:13:39.768 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:39.768 "is_configured": true, 00:13:39.768 "data_offset": 0, 00:13:39.768 "data_size": 65536 00:13:39.768 }, 00:13:39.768 { 00:13:39.768 "name": "BaseBdev4", 00:13:39.768 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:39.768 "is_configured": true, 00:13:39.768 "data_offset": 0, 00:13:39.768 "data_size": 65536 00:13:39.768 } 00:13:39.768 ] 00:13:39.768 }' 00:13:39.768 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.768 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.028 118.50 IOPS, 355.50 MiB/s [2024-11-26T13:26:28.598Z] 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.028 13:26:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.028 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.028 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.028 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.028 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.028 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.028 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.028 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.288 "name": "raid_bdev1", 00:13:40.288 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:40.288 "strip_size_kb": 0, 00:13:40.288 "state": "online", 00:13:40.288 "raid_level": "raid1", 00:13:40.288 "superblock": false, 00:13:40.288 "num_base_bdevs": 4, 00:13:40.288 "num_base_bdevs_discovered": 3, 00:13:40.288 "num_base_bdevs_operational": 3, 00:13:40.288 "base_bdevs_list": [ 00:13:40.288 { 00:13:40.288 "name": null, 00:13:40.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.288 "is_configured": false, 00:13:40.288 "data_offset": 0, 00:13:40.288 "data_size": 65536 00:13:40.288 }, 00:13:40.288 { 00:13:40.288 "name": "BaseBdev2", 00:13:40.288 "uuid": "9a5b2a1c-be24-54c0-87f7-37a7f8d037af", 00:13:40.288 "is_configured": true, 00:13:40.288 "data_offset": 0, 00:13:40.288 "data_size": 65536 00:13:40.288 }, 00:13:40.288 { 00:13:40.288 "name": "BaseBdev3", 00:13:40.288 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 
00:13:40.288 "is_configured": true, 00:13:40.288 "data_offset": 0, 00:13:40.288 "data_size": 65536 00:13:40.288 }, 00:13:40.288 { 00:13:40.288 "name": "BaseBdev4", 00:13:40.288 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:40.288 "is_configured": true, 00:13:40.288 "data_offset": 0, 00:13:40.288 "data_size": 65536 00:13:40.288 } 00:13:40.288 ] 00:13:40.288 }' 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.288 [2024-11-26 13:26:28.743493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.288 13:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:40.288 [2024-11-26 13:26:28.811418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:40.288 [2024-11-26 13:26:28.813823] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.548 [2024-11-26 13:26:28.922415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.548 [2024-11-26 13:26:28.924899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.809 [2024-11-26 13:26:29.143978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:40.809 [2024-11-26 13:26:29.145301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:41.067 135.00 IOPS, 405.00 MiB/s [2024-11-26T13:26:29.637Z] [2024-11-26 13:26:29.491057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:41.326 [2024-11-26 13:26:29.718982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:41.326 [2024-11-26 13:26:29.719483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:41.326 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.326 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.326 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.326 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.326 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.326 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.326 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.327 13:26:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.327 13:26:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.327 13:26:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.327 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.327 "name": "raid_bdev1", 00:13:41.327 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:41.327 "strip_size_kb": 0, 00:13:41.327 "state": "online", 00:13:41.327 "raid_level": "raid1", 00:13:41.327 "superblock": false, 00:13:41.327 "num_base_bdevs": 4, 00:13:41.327 "num_base_bdevs_discovered": 4, 00:13:41.327 "num_base_bdevs_operational": 4, 00:13:41.327 "process": { 00:13:41.327 "type": "rebuild", 00:13:41.327 "target": "spare", 00:13:41.327 "progress": { 00:13:41.327 "blocks": 10240, 00:13:41.327 "percent": 15 00:13:41.327 } 00:13:41.327 }, 00:13:41.327 "base_bdevs_list": [ 00:13:41.327 { 00:13:41.327 "name": "spare", 00:13:41.327 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:41.327 "is_configured": true, 00:13:41.327 "data_offset": 0, 00:13:41.327 "data_size": 65536 00:13:41.327 }, 00:13:41.327 { 00:13:41.327 "name": "BaseBdev2", 00:13:41.327 "uuid": "9a5b2a1c-be24-54c0-87f7-37a7f8d037af", 00:13:41.327 "is_configured": true, 00:13:41.327 "data_offset": 0, 00:13:41.327 "data_size": 65536 00:13:41.327 }, 00:13:41.327 { 00:13:41.327 "name": "BaseBdev3", 00:13:41.327 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:41.327 "is_configured": true, 00:13:41.327 "data_offset": 0, 00:13:41.327 "data_size": 65536 00:13:41.327 }, 00:13:41.327 { 00:13:41.327 "name": "BaseBdev4", 00:13:41.327 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:41.327 "is_configured": true, 00:13:41.327 "data_offset": 0, 00:13:41.327 "data_size": 65536 00:13:41.327 } 00:13:41.327 ] 00:13:41.327 }' 00:13:41.327 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:41.586 [2024-11-26 13:26:29.957075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.586 13:26:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.586 [2024-11-26 13:26:29.966483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:41.586 [2024-11-26 13:26:30.074561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:41.586 [2024-11-26 13:26:30.083861] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:41.586 [2024-11-26 13:26:30.083894] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.586 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.586 "name": "raid_bdev1", 00:13:41.586 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:41.586 "strip_size_kb": 0, 00:13:41.586 "state": "online", 00:13:41.586 "raid_level": "raid1", 00:13:41.586 "superblock": false, 00:13:41.587 "num_base_bdevs": 4, 00:13:41.587 "num_base_bdevs_discovered": 3, 00:13:41.587 "num_base_bdevs_operational": 3, 00:13:41.587 "process": { 00:13:41.587 "type": "rebuild", 00:13:41.587 "target": "spare", 00:13:41.587 "progress": { 00:13:41.587 "blocks": 16384, 00:13:41.587 "percent": 25 00:13:41.587 } 00:13:41.587 }, 00:13:41.587 "base_bdevs_list": [ 00:13:41.587 { 00:13:41.587 "name": "spare", 00:13:41.587 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:41.587 "is_configured": true, 00:13:41.587 "data_offset": 0, 00:13:41.587 "data_size": 65536 00:13:41.587 }, 00:13:41.587 { 00:13:41.587 "name": null, 
00:13:41.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.587 "is_configured": false, 00:13:41.587 "data_offset": 0, 00:13:41.587 "data_size": 65536 00:13:41.587 }, 00:13:41.587 { 00:13:41.587 "name": "BaseBdev3", 00:13:41.587 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:41.587 "is_configured": true, 00:13:41.587 "data_offset": 0, 00:13:41.587 "data_size": 65536 00:13:41.587 }, 00:13:41.587 { 00:13:41.587 "name": "BaseBdev4", 00:13:41.587 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:41.587 "is_configured": true, 00:13:41.587 "data_offset": 0, 00:13:41.587 "data_size": 65536 00:13:41.587 } 00:13:41.587 ] 00:13:41.587 }' 00:13:41.587 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=492 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.846 117.75 IOPS, 353.25 MiB/s [2024-11-26T13:26:30.416Z] 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.846 "name": "raid_bdev1", 00:13:41.846 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:41.846 "strip_size_kb": 0, 00:13:41.846 "state": "online", 00:13:41.846 "raid_level": "raid1", 00:13:41.846 "superblock": false, 00:13:41.846 "num_base_bdevs": 4, 00:13:41.846 "num_base_bdevs_discovered": 3, 00:13:41.846 "num_base_bdevs_operational": 3, 00:13:41.846 "process": { 00:13:41.846 "type": "rebuild", 00:13:41.846 "target": "spare", 00:13:41.846 "progress": { 00:13:41.846 "blocks": 18432, 00:13:41.846 "percent": 28 00:13:41.846 } 00:13:41.846 }, 00:13:41.846 "base_bdevs_list": [ 00:13:41.846 { 00:13:41.846 "name": "spare", 00:13:41.846 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:41.846 "is_configured": true, 00:13:41.846 "data_offset": 0, 00:13:41.846 "data_size": 65536 00:13:41.846 }, 00:13:41.846 { 00:13:41.846 "name": null, 00:13:41.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.846 "is_configured": false, 00:13:41.846 "data_offset": 0, 00:13:41.846 "data_size": 65536 00:13:41.846 }, 00:13:41.846 { 00:13:41.846 "name": "BaseBdev3", 00:13:41.846 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:41.846 "is_configured": true, 00:13:41.846 "data_offset": 0, 00:13:41.846 "data_size": 65536 00:13:41.846 }, 00:13:41.846 { 00:13:41.846 "name": "BaseBdev4", 00:13:41.846 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:41.846 "is_configured": true, 
00:13:41.846 "data_offset": 0, 00:13:41.846 "data_size": 65536 00:13:41.846 } 00:13:41.846 ] 00:13:41.846 }' 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.846 [2024-11-26 13:26:30.320953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.846 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.847 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.847 13:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.107 [2024-11-26 13:26:30.461500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:42.367 [2024-11-26 13:26:30.881885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:42.936 [2024-11-26 13:26:31.201115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:42.936 [2024-11-26 13:26:31.201525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:42.936 107.20 IOPS, 321.60 MiB/s [2024-11-26T13:26:31.506Z] [2024-11-26 13:26:31.317513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:42.936 [2024-11-26 13:26:31.317858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.936 13:26:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.936 "name": "raid_bdev1", 00:13:42.936 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:42.936 "strip_size_kb": 0, 00:13:42.936 "state": "online", 00:13:42.936 "raid_level": "raid1", 00:13:42.936 "superblock": false, 00:13:42.936 "num_base_bdevs": 4, 00:13:42.936 "num_base_bdevs_discovered": 3, 00:13:42.936 "num_base_bdevs_operational": 3, 00:13:42.936 "process": { 00:13:42.936 "type": "rebuild", 00:13:42.936 "target": "spare", 00:13:42.936 "progress": { 00:13:42.936 "blocks": 34816, 00:13:42.936 "percent": 53 00:13:42.936 } 00:13:42.936 }, 00:13:42.936 "base_bdevs_list": [ 00:13:42.936 { 00:13:42.936 "name": "spare", 00:13:42.936 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:42.936 "is_configured": true, 00:13:42.936 "data_offset": 0, 00:13:42.936 "data_size": 65536 
00:13:42.936 }, 00:13:42.936 { 00:13:42.936 "name": null, 00:13:42.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.936 "is_configured": false, 00:13:42.936 "data_offset": 0, 00:13:42.936 "data_size": 65536 00:13:42.936 }, 00:13:42.936 { 00:13:42.936 "name": "BaseBdev3", 00:13:42.936 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:42.936 "is_configured": true, 00:13:42.936 "data_offset": 0, 00:13:42.936 "data_size": 65536 00:13:42.936 }, 00:13:42.936 { 00:13:42.936 "name": "BaseBdev4", 00:13:42.936 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:42.936 "is_configured": true, 00:13:42.936 "data_offset": 0, 00:13:42.936 "data_size": 65536 00:13:42.936 } 00:13:42.936 ] 00:13:42.936 }' 00:13:42.936 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.196 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.196 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.196 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.196 13:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.765 94.33 IOPS, 283.00 MiB/s [2024-11-26T13:26:32.335Z] [2024-11-26 13:26:32.310430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:44.024 [2024-11-26 13:26:32.417584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:44.024 [2024-11-26 13:26:32.417924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:44.024 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.024 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.024 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.024 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.024 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.024 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.025 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.025 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.025 13:26:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.025 13:26:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.285 13:26:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.285 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.285 "name": "raid_bdev1", 00:13:44.285 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:44.285 "strip_size_kb": 0, 00:13:44.285 "state": "online", 00:13:44.285 "raid_level": "raid1", 00:13:44.285 "superblock": false, 00:13:44.285 "num_base_bdevs": 4, 00:13:44.285 "num_base_bdevs_discovered": 3, 00:13:44.285 "num_base_bdevs_operational": 3, 00:13:44.285 "process": { 00:13:44.285 "type": "rebuild", 00:13:44.285 "target": "spare", 00:13:44.285 "progress": { 00:13:44.285 "blocks": 53248, 00:13:44.285 "percent": 81 00:13:44.285 } 00:13:44.285 }, 00:13:44.285 "base_bdevs_list": [ 00:13:44.285 { 00:13:44.285 "name": "spare", 00:13:44.285 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:44.285 "is_configured": true, 00:13:44.285 "data_offset": 0, 00:13:44.285 "data_size": 65536 00:13:44.285 }, 00:13:44.285 { 00:13:44.285 "name": null, 
00:13:44.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.285 "is_configured": false, 00:13:44.285 "data_offset": 0, 00:13:44.285 "data_size": 65536 00:13:44.285 }, 00:13:44.285 { 00:13:44.285 "name": "BaseBdev3", 00:13:44.285 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:44.285 "is_configured": true, 00:13:44.285 "data_offset": 0, 00:13:44.285 "data_size": 65536 00:13:44.285 }, 00:13:44.285 { 00:13:44.285 "name": "BaseBdev4", 00:13:44.285 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:44.285 "is_configured": true, 00:13:44.285 "data_offset": 0, 00:13:44.285 "data_size": 65536 00:13:44.285 } 00:13:44.285 ] 00:13:44.285 }' 00:13:44.285 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.285 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.285 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.285 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.285 13:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.854 [2024-11-26 13:26:33.198263] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:44.854 85.71 IOPS, 257.14 MiB/s [2024-11-26T13:26:33.424Z] [2024-11-26 13:26:33.303988] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:44.854 [2024-11-26 13:26:33.309515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.425 "name": "raid_bdev1", 00:13:45.425 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:45.425 "strip_size_kb": 0, 00:13:45.425 "state": "online", 00:13:45.425 "raid_level": "raid1", 00:13:45.425 "superblock": false, 00:13:45.425 "num_base_bdevs": 4, 00:13:45.425 "num_base_bdevs_discovered": 3, 00:13:45.425 "num_base_bdevs_operational": 3, 00:13:45.425 "base_bdevs_list": [ 00:13:45.425 { 00:13:45.425 "name": "spare", 00:13:45.425 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 0, 00:13:45.425 "data_size": 65536 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "name": null, 00:13:45.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.425 "is_configured": false, 00:13:45.425 "data_offset": 0, 00:13:45.425 "data_size": 65536 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "name": "BaseBdev3", 00:13:45.425 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 0, 00:13:45.425 "data_size": 
65536 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "name": "BaseBdev4", 00:13:45.425 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 0, 00:13:45.425 "data_size": 65536 00:13:45.425 } 00:13:45.425 ] 00:13:45.425 }' 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.425 13:26:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.425 "name": "raid_bdev1", 00:13:45.425 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:45.425 "strip_size_kb": 0, 00:13:45.425 "state": "online", 00:13:45.425 "raid_level": "raid1", 00:13:45.425 "superblock": false, 00:13:45.425 "num_base_bdevs": 4, 00:13:45.425 "num_base_bdevs_discovered": 3, 00:13:45.425 "num_base_bdevs_operational": 3, 00:13:45.425 "base_bdevs_list": [ 00:13:45.425 { 00:13:45.425 "name": "spare", 00:13:45.425 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 0, 00:13:45.425 "data_size": 65536 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "name": null, 00:13:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.426 "is_configured": false, 00:13:45.426 "data_offset": 0, 00:13:45.426 "data_size": 65536 00:13:45.426 }, 00:13:45.426 { 00:13:45.426 "name": "BaseBdev3", 00:13:45.426 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:45.426 "is_configured": true, 00:13:45.426 "data_offset": 0, 00:13:45.426 "data_size": 65536 00:13:45.426 }, 00:13:45.426 { 00:13:45.426 "name": "BaseBdev4", 00:13:45.426 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:45.426 "is_configured": true, 00:13:45.426 "data_offset": 0, 00:13:45.426 "data_size": 65536 00:13:45.426 } 00:13:45.426 ] 00:13:45.426 }' 00:13:45.426 13:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.686 "name": "raid_bdev1", 00:13:45.686 "uuid": "9ce4a1d7-3cc1-4a72-8f9d-2215e92cbbf7", 00:13:45.686 "strip_size_kb": 0, 00:13:45.686 "state": "online", 00:13:45.686 "raid_level": "raid1", 00:13:45.686 "superblock": false, 00:13:45.686 "num_base_bdevs": 4, 00:13:45.686 "num_base_bdevs_discovered": 3, 00:13:45.686 "num_base_bdevs_operational": 3, 00:13:45.686 "base_bdevs_list": [ 00:13:45.686 { 00:13:45.686 "name": "spare", 
00:13:45.686 "uuid": "461d98bb-50cc-5101-b72d-fc4531fd6d47", 00:13:45.686 "is_configured": true, 00:13:45.686 "data_offset": 0, 00:13:45.686 "data_size": 65536 00:13:45.686 }, 00:13:45.686 { 00:13:45.686 "name": null, 00:13:45.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.686 "is_configured": false, 00:13:45.686 "data_offset": 0, 00:13:45.686 "data_size": 65536 00:13:45.686 }, 00:13:45.686 { 00:13:45.686 "name": "BaseBdev3", 00:13:45.686 "uuid": "77643eb3-22ee-5673-a251-603153b8801d", 00:13:45.686 "is_configured": true, 00:13:45.686 "data_offset": 0, 00:13:45.686 "data_size": 65536 00:13:45.686 }, 00:13:45.686 { 00:13:45.686 "name": "BaseBdev4", 00:13:45.686 "uuid": "b7585d4b-3d78-5ee5-8294-3786d2dc3635", 00:13:45.686 "is_configured": true, 00:13:45.686 "data_offset": 0, 00:13:45.686 "data_size": 65536 00:13:45.686 } 00:13:45.686 ] 00:13:45.686 }' 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.686 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.205 79.25 IOPS, 237.75 MiB/s [2024-11-26T13:26:34.775Z] 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.205 [2024-11-26 13:26:34.566685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.205 [2024-11-26 13:26:34.566737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.205 00:13:46.205 Latency(us) 00:13:46.205 [2024-11-26T13:26:34.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.205 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:46.205 raid_bdev1 : 8.35 77.11 231.33 
0.00 0.00 18113.36 275.55 112483.61 00:13:46.205 [2024-11-26T13:26:34.775Z] =================================================================================================================== 00:13:46.205 [2024-11-26T13:26:34.775Z] Total : 77.11 231.33 0.00 0.00 18113.36 275.55 112483.61 00:13:46.205 [2024-11-26 13:26:34.630159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.205 [2024-11-26 13:26:34.630214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.205 [2024-11-26 13:26:34.630342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.205 [2024-11-26 13:26:34.630364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:46.205 { 00:13:46.205 "results": [ 00:13:46.205 { 00:13:46.205 "job": "raid_bdev1", 00:13:46.205 "core_mask": "0x1", 00:13:46.205 "workload": "randrw", 00:13:46.205 "percentage": 50, 00:13:46.205 "status": "finished", 00:13:46.205 "queue_depth": 2, 00:13:46.205 "io_size": 3145728, 00:13:46.205 "runtime": 8.351834, 00:13:46.205 "iops": 77.10881226805992, 00:13:46.205 "mibps": 231.32643680417976, 00:13:46.205 "io_failed": 0, 00:13:46.205 "io_timeout": 0, 00:13:46.205 "avg_latency_us": 18113.361671372106, 00:13:46.205 "min_latency_us": 275.5490909090909, 00:13:46.205 "max_latency_us": 112483.60727272727 00:13:46.205 } 00:13:46.205 ], 00:13:46.205 "core_count": 1 00:13:46.205 } 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.205 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:46.465 /dev/nbd0 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 
00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.465 1+0 records in 00:13:46.465 1+0 records out 00:13:46.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285614 s, 14.3 MB/s 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:46.465 13:26:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.465 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:46.466 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:46.466 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:46.466 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:46.466 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:46.466 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:46.466 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.466 13:26:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:46.724 /dev/nbd1 00:13:46.724 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:46.984 13:26:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.984 1+0 records in 00:13:46.984 1+0 records out 00:13:46.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678163 s, 6.0 MB/s 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.984 
13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.984 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:47.243 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:47.243 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:47.243 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:47.243 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.243 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.243 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.502 13:26:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:47.761 /dev/nbd1 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.761 1+0 records in 
00:13:47.761 1+0 records out 00:13:47.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344693 s, 11.9 MB/s 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.761 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.021 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.280 13:26:36 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78362 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78362 ']' 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78362 00:13:48.280 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:48.539 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.539 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78362 00:13:48.539 killing process with pid 78362 00:13:48.539 Received shutdown signal, test time was about 10.606759 seconds 00:13:48.539 00:13:48.539 Latency(us) 00:13:48.539 [2024-11-26T13:26:37.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.539 [2024-11-26T13:26:37.109Z] =================================================================================================================== 00:13:48.539 [2024-11-26T13:26:37.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.539 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.539 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.539 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78362' 00:13:48.539 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78362 
00:13:48.539 [2024-11-26 13:26:36.869512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.539 13:26:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78362 00:13:48.799 [2024-11-26 13:26:37.155515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.736 13:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:49.736 00:13:49.736 real 0m13.709s 00:13:49.737 user 0m18.086s 00:13:49.737 sys 0m1.725s 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.737 ************************************ 00:13:49.737 END TEST raid_rebuild_test_io 00:13:49.737 ************************************ 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.737 13:26:38 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:49.737 13:26:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:49.737 13:26:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.737 13:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.737 ************************************ 00:13:49.737 START TEST raid_rebuild_test_sb_io 00:13:49.737 ************************************ 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:49.737 13:26:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78771 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78771 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78771 ']' 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.737 13:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.737 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:49.737 Zero copy mechanism will not be used. 00:13:49.737 [2024-11-26 13:26:38.221460] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:13:49.737 [2024-11-26 13:26:38.221634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78771 ] 00:13:49.996 [2024-11-26 13:26:38.402357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.996 [2024-11-26 13:26:38.507975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.254 [2024-11-26 13:26:38.685060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.254 [2024-11-26 13:26:38.685095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.854 BaseBdev1_malloc 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.854 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.854 [2024-11-26 13:26:39.202654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.854 [2024-11-26 13:26:39.202728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.854 [2024-11-26 13:26:39.202755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:50.855 [2024-11-26 13:26:39.202770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.855 [2024-11-26 13:26:39.205221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.855 [2024-11-26 13:26:39.205294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.855 BaseBdev1 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 BaseBdev2_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 [2024-11-26 13:26:39.248450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:50.855 [2024-11-26 13:26:39.248752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.855 [2024-11-26 13:26:39.248786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:50.855 [2024-11-26 13:26:39.248805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.855 [2024-11-26 13:26:39.251408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.855 [2024-11-26 13:26:39.251450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.855 BaseBdev2 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 BaseBdev3_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 [2024-11-26 13:26:39.307838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:50.855 [2024-11-26 13:26:39.307895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.855 [2024-11-26 13:26:39.307920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:50.855 [2024-11-26 13:26:39.307934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.855 [2024-11-26 13:26:39.310394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.855 [2024-11-26 13:26:39.310439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:50.855 BaseBdev3 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 BaseBdev4_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 [2024-11-26 13:26:39.353662] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:13:50.855 [2024-11-26 13:26:39.353719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.855 [2024-11-26 13:26:39.353744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:50.855 [2024-11-26 13:26:39.353759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.855 [2024-11-26 13:26:39.356196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.855 [2024-11-26 13:26:39.356270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:50.855 BaseBdev4 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 spare_malloc 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 spare_delay 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:50.855 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.855 [2024-11-26 13:26:39.411290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:50.855 [2024-11-26 13:26:39.411518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.855 [2024-11-26 13:26:39.411586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:50.855 [2024-11-26 13:26:39.411609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.123 [2024-11-26 13:26:39.414155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.123 [2024-11-26 13:26:39.414200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.123 spare 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.123 [2024-11-26 13:26:39.423341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.123 [2024-11-26 13:26:39.425605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.123 [2024-11-26 13:26:39.425691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.123 [2024-11-26 13:26:39.425777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:51.123 [2024-11-26 13:26:39.425980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:13:51.123 [2024-11-26 13:26:39.426005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.123 [2024-11-26 13:26:39.426273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:51.123 [2024-11-26 13:26:39.426473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:51.123 [2024-11-26 13:26:39.426489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:51.123 [2024-11-26 13:26:39.426646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.123 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.123 "name": "raid_bdev1", 00:13:51.123 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:51.124 "strip_size_kb": 0, 00:13:51.124 "state": "online", 00:13:51.124 "raid_level": "raid1", 00:13:51.124 "superblock": true, 00:13:51.124 "num_base_bdevs": 4, 00:13:51.124 "num_base_bdevs_discovered": 4, 00:13:51.124 "num_base_bdevs_operational": 4, 00:13:51.124 "base_bdevs_list": [ 00:13:51.124 { 00:13:51.124 "name": "BaseBdev1", 00:13:51.124 "uuid": "26547a8e-9f70-5130-9deb-176cbb7491c0", 00:13:51.124 "is_configured": true, 00:13:51.124 "data_offset": 2048, 00:13:51.124 "data_size": 63488 00:13:51.124 }, 00:13:51.124 { 00:13:51.124 "name": "BaseBdev2", 00:13:51.124 "uuid": "2e76c9f3-2402-5ec4-b20b-2ecae7a14425", 00:13:51.124 "is_configured": true, 00:13:51.124 "data_offset": 2048, 00:13:51.124 "data_size": 63488 00:13:51.124 }, 00:13:51.124 { 00:13:51.124 "name": "BaseBdev3", 00:13:51.124 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:51.124 "is_configured": true, 00:13:51.124 "data_offset": 2048, 00:13:51.124 "data_size": 63488 00:13:51.124 }, 00:13:51.124 { 00:13:51.124 "name": "BaseBdev4", 00:13:51.124 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:51.124 "is_configured": true, 00:13:51.124 "data_offset": 2048, 00:13:51.124 "data_size": 63488 00:13:51.124 } 00:13:51.124 ] 00:13:51.124 }' 00:13:51.124 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:51.124 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.412 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:51.412 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:51.412 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.412 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.412 [2024-11-26 13:26:39.931870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.412 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.670 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:51.670 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:51.670 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.670 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.670 13:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.670 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.670 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.671 13:26:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.671 [2024-11-26 13:26:40.043473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.671 "name": "raid_bdev1", 00:13:51.671 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:51.671 "strip_size_kb": 0, 00:13:51.671 "state": "online", 00:13:51.671 "raid_level": "raid1", 00:13:51.671 "superblock": true, 00:13:51.671 "num_base_bdevs": 4, 00:13:51.671 "num_base_bdevs_discovered": 3, 00:13:51.671 "num_base_bdevs_operational": 3, 00:13:51.671 "base_bdevs_list": [ 00:13:51.671 { 00:13:51.671 "name": null, 00:13:51.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.671 "is_configured": false, 00:13:51.671 "data_offset": 0, 00:13:51.671 "data_size": 63488 00:13:51.671 }, 00:13:51.671 { 00:13:51.671 "name": "BaseBdev2", 00:13:51.671 "uuid": "2e76c9f3-2402-5ec4-b20b-2ecae7a14425", 00:13:51.671 "is_configured": true, 00:13:51.671 "data_offset": 2048, 00:13:51.671 "data_size": 63488 00:13:51.671 }, 00:13:51.671 { 00:13:51.671 "name": "BaseBdev3", 00:13:51.671 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:51.671 "is_configured": true, 00:13:51.671 "data_offset": 2048, 00:13:51.671 "data_size": 63488 00:13:51.671 }, 00:13:51.671 { 00:13:51.671 "name": "BaseBdev4", 00:13:51.671 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:51.671 "is_configured": true, 00:13:51.671 "data_offset": 2048, 00:13:51.671 "data_size": 63488 00:13:51.671 } 00:13:51.671 ] 00:13:51.671 }' 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.671 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.671 [2024-11-26 13:26:40.171188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:51.671 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.671 Zero copy mechanism will not be used. 
00:13:51.671 Running I/O for 60 seconds... 00:13:52.238 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.238 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.238 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.238 [2024-11-26 13:26:40.560063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.238 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.238 13:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:52.238 [2024-11-26 13:26:40.617036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:52.238 [2024-11-26 13:26:40.619431] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.238 [2024-11-26 13:26:40.757007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:52.496 [2024-11-26 13:26:40.880984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:52.496 [2024-11-26 13:26:40.881507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:52.754 196.00 IOPS, 588.00 MiB/s [2024-11-26T13:26:41.324Z] [2024-11-26 13:26:41.221204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:52.754 [2024-11-26 13:26:41.222652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:53.012 [2024-11-26 13:26:41.450472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 
00:13:53.012 [2024-11-26 13:26:41.451264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.271 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.271 "name": "raid_bdev1", 00:13:53.271 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:53.271 "strip_size_kb": 0, 00:13:53.271 "state": "online", 00:13:53.271 "raid_level": "raid1", 00:13:53.271 "superblock": true, 00:13:53.271 "num_base_bdevs": 4, 00:13:53.271 "num_base_bdevs_discovered": 4, 00:13:53.271 "num_base_bdevs_operational": 4, 00:13:53.271 "process": { 00:13:53.271 "type": "rebuild", 00:13:53.271 "target": "spare", 00:13:53.271 "progress": { 00:13:53.271 "blocks": 10240, 00:13:53.271 "percent": 16 00:13:53.271 } 00:13:53.271 }, 00:13:53.271 
"base_bdevs_list": [ 00:13:53.271 { 00:13:53.271 "name": "spare", 00:13:53.271 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 2048, 00:13:53.271 "data_size": 63488 00:13:53.271 }, 00:13:53.271 { 00:13:53.271 "name": "BaseBdev2", 00:13:53.271 "uuid": "2e76c9f3-2402-5ec4-b20b-2ecae7a14425", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 2048, 00:13:53.271 "data_size": 63488 00:13:53.271 }, 00:13:53.271 { 00:13:53.271 "name": "BaseBdev3", 00:13:53.271 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 2048, 00:13:53.271 "data_size": 63488 00:13:53.271 }, 00:13:53.271 { 00:13:53.271 "name": "BaseBdev4", 00:13:53.271 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:53.271 "is_configured": true, 00:13:53.272 "data_offset": 2048, 00:13:53.272 "data_size": 63488 00:13:53.272 } 00:13:53.272 ] 00:13:53.272 }' 00:13:53.272 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.272 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.272 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.272 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.272 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:53.272 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.272 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.272 [2024-11-26 13:26:41.776321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.272 [2024-11-26 13:26:41.820244] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:13:53.272 [2024-11-26 13:26:41.830263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.272 [2024-11-26 13:26:41.830307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.272 [2024-11-26 13:26:41.830323] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:53.531 [2024-11-26 13:26:41.855553] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.531 "name": "raid_bdev1", 00:13:53.531 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:53.531 "strip_size_kb": 0, 00:13:53.531 "state": "online", 00:13:53.531 "raid_level": "raid1", 00:13:53.531 "superblock": true, 00:13:53.531 "num_base_bdevs": 4, 00:13:53.531 "num_base_bdevs_discovered": 3, 00:13:53.531 "num_base_bdevs_operational": 3, 00:13:53.531 "base_bdevs_list": [ 00:13:53.531 { 00:13:53.531 "name": null, 00:13:53.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.531 "is_configured": false, 00:13:53.531 "data_offset": 0, 00:13:53.531 "data_size": 63488 00:13:53.531 }, 00:13:53.531 { 00:13:53.531 "name": "BaseBdev2", 00:13:53.531 "uuid": "2e76c9f3-2402-5ec4-b20b-2ecae7a14425", 00:13:53.531 "is_configured": true, 00:13:53.531 "data_offset": 2048, 00:13:53.531 "data_size": 63488 00:13:53.531 }, 00:13:53.531 { 00:13:53.531 "name": "BaseBdev3", 00:13:53.531 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:53.531 "is_configured": true, 00:13:53.531 "data_offset": 2048, 00:13:53.531 "data_size": 63488 00:13:53.531 }, 00:13:53.531 { 00:13:53.531 "name": "BaseBdev4", 00:13:53.531 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:53.531 "is_configured": true, 00:13:53.531 "data_offset": 2048, 00:13:53.531 "data_size": 63488 00:13:53.531 } 00:13:53.531 ] 00:13:53.531 }' 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.531 13:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 171.50 IOPS, 514.50 
MiB/s [2024-11-26T13:26:42.619Z] 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.049 "name": "raid_bdev1", 00:13:54.049 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:54.049 "strip_size_kb": 0, 00:13:54.049 "state": "online", 00:13:54.049 "raid_level": "raid1", 00:13:54.049 "superblock": true, 00:13:54.049 "num_base_bdevs": 4, 00:13:54.049 "num_base_bdevs_discovered": 3, 00:13:54.049 "num_base_bdevs_operational": 3, 00:13:54.049 "base_bdevs_list": [ 00:13:54.049 { 00:13:54.049 "name": null, 00:13:54.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.049 "is_configured": false, 00:13:54.049 "data_offset": 0, 00:13:54.049 "data_size": 63488 00:13:54.049 }, 00:13:54.049 { 00:13:54.049 "name": "BaseBdev2", 00:13:54.049 "uuid": "2e76c9f3-2402-5ec4-b20b-2ecae7a14425", 00:13:54.049 
"is_configured": true, 00:13:54.049 "data_offset": 2048, 00:13:54.049 "data_size": 63488 00:13:54.049 }, 00:13:54.049 { 00:13:54.049 "name": "BaseBdev3", 00:13:54.049 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:54.049 "is_configured": true, 00:13:54.049 "data_offset": 2048, 00:13:54.049 "data_size": 63488 00:13:54.049 }, 00:13:54.049 { 00:13:54.049 "name": "BaseBdev4", 00:13:54.049 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:54.049 "is_configured": true, 00:13:54.049 "data_offset": 2048, 00:13:54.049 "data_size": 63488 00:13:54.049 } 00:13:54.049 ] 00:13:54.049 }' 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 [2024-11-26 13:26:42.573448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 13:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:54.308 [2024-11-26 13:26:42.623639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:54.308 [2024-11-26 13:26:42.625968] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.566 
[2024-11-26 13:26:42.884120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:54.566 [2024-11-26 13:26:42.884398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:54.825 166.33 IOPS, 499.00 MiB/s [2024-11-26T13:26:43.395Z] [2024-11-26 13:26:43.253958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.085 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.344 "name": "raid_bdev1", 00:13:55.344 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:55.344 "strip_size_kb": 0, 00:13:55.344 "state": "online", 00:13:55.344 "raid_level": "raid1", 00:13:55.344 "superblock": true, 
00:13:55.344 "num_base_bdevs": 4, 00:13:55.344 "num_base_bdevs_discovered": 4, 00:13:55.344 "num_base_bdevs_operational": 4, 00:13:55.344 "process": { 00:13:55.344 "type": "rebuild", 00:13:55.344 "target": "spare", 00:13:55.344 "progress": { 00:13:55.344 "blocks": 14336, 00:13:55.344 "percent": 22 00:13:55.344 } 00:13:55.344 }, 00:13:55.344 "base_bdevs_list": [ 00:13:55.344 { 00:13:55.344 "name": "spare", 00:13:55.344 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:55.344 "is_configured": true, 00:13:55.344 "data_offset": 2048, 00:13:55.344 "data_size": 63488 00:13:55.344 }, 00:13:55.344 { 00:13:55.344 "name": "BaseBdev2", 00:13:55.344 "uuid": "2e76c9f3-2402-5ec4-b20b-2ecae7a14425", 00:13:55.344 "is_configured": true, 00:13:55.344 "data_offset": 2048, 00:13:55.344 "data_size": 63488 00:13:55.344 }, 00:13:55.344 { 00:13:55.344 "name": "BaseBdev3", 00:13:55.344 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:55.344 "is_configured": true, 00:13:55.344 "data_offset": 2048, 00:13:55.344 "data_size": 63488 00:13:55.344 }, 00:13:55.344 { 00:13:55.344 "name": "BaseBdev4", 00:13:55.344 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:55.344 "is_configured": true, 00:13:55.344 "data_offset": 2048, 00:13:55.344 "data_size": 63488 00:13:55.344 } 00:13:55.344 ] 00:13:55.344 }' 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:55.344 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.344 13:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.344 [2024-11-26 13:26:43.765892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.603 [2024-11-26 13:26:44.046924] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:55.603 [2024-11-26 13:26:44.046960] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.603 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.603 "name": "raid_bdev1", 00:13:55.603 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:55.603 "strip_size_kb": 0, 00:13:55.603 "state": "online", 00:13:55.603 "raid_level": "raid1", 00:13:55.603 "superblock": true, 00:13:55.603 "num_base_bdevs": 4, 00:13:55.603 "num_base_bdevs_discovered": 3, 00:13:55.603 "num_base_bdevs_operational": 3, 00:13:55.603 "process": { 00:13:55.603 "type": "rebuild", 00:13:55.603 "target": "spare", 00:13:55.603 "progress": { 00:13:55.603 "blocks": 18432, 00:13:55.603 "percent": 29 00:13:55.603 } 00:13:55.603 }, 00:13:55.603 "base_bdevs_list": [ 00:13:55.603 { 00:13:55.603 "name": "spare", 00:13:55.604 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:55.604 "is_configured": true, 00:13:55.604 "data_offset": 2048, 00:13:55.604 "data_size": 63488 00:13:55.604 }, 00:13:55.604 { 00:13:55.604 "name": null, 00:13:55.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.604 "is_configured": false, 00:13:55.604 "data_offset": 0, 00:13:55.604 "data_size": 63488 00:13:55.604 }, 00:13:55.604 { 00:13:55.604 "name": "BaseBdev3", 00:13:55.604 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:55.604 "is_configured": true, 00:13:55.604 "data_offset": 2048, 00:13:55.604 "data_size": 63488 00:13:55.604 }, 00:13:55.604 { 
00:13:55.604 "name": "BaseBdev4", 00:13:55.604 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:55.604 "is_configured": true, 00:13:55.604 "data_offset": 2048, 00:13:55.604 "data_size": 63488 00:13:55.604 } 00:13:55.604 ] 00:13:55.604 }' 00:13:55.604 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.604 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.863 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.863 [2024-11-26 13:26:44.174227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:55.863 [2024-11-26 13:26:44.175426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:55.863 139.75 IOPS, 419.25 MiB/s [2024-11-26T13:26:44.433Z] 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.863 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=506 00:13:55.863 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.863 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.863 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.863 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.864 "name": "raid_bdev1", 00:13:55.864 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:55.864 "strip_size_kb": 0, 00:13:55.864 "state": "online", 00:13:55.864 "raid_level": "raid1", 00:13:55.864 "superblock": true, 00:13:55.864 "num_base_bdevs": 4, 00:13:55.864 "num_base_bdevs_discovered": 3, 00:13:55.864 "num_base_bdevs_operational": 3, 00:13:55.864 "process": { 00:13:55.864 "type": "rebuild", 00:13:55.864 "target": "spare", 00:13:55.864 "progress": { 00:13:55.864 "blocks": 20480, 00:13:55.864 "percent": 32 00:13:55.864 } 00:13:55.864 }, 00:13:55.864 "base_bdevs_list": [ 00:13:55.864 { 00:13:55.864 "name": "spare", 00:13:55.864 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:55.864 "is_configured": true, 00:13:55.864 "data_offset": 2048, 00:13:55.864 "data_size": 63488 00:13:55.864 }, 00:13:55.864 { 00:13:55.864 "name": null, 00:13:55.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.864 "is_configured": false, 00:13:55.864 "data_offset": 0, 00:13:55.864 "data_size": 63488 00:13:55.864 }, 00:13:55.864 { 00:13:55.864 "name": "BaseBdev3", 00:13:55.864 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:55.864 "is_configured": true, 00:13:55.864 "data_offset": 2048, 00:13:55.864 "data_size": 63488 00:13:55.864 }, 00:13:55.864 { 00:13:55.864 "name": "BaseBdev4", 00:13:55.864 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:55.864 "is_configured": true, 00:13:55.864 "data_offset": 2048, 
00:13:55.864 "data_size": 63488 00:13:55.864 } 00:13:55.864 ] 00:13:55.864 }' 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.864 13:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.864 [2024-11-26 13:26:44.413318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:55.864 [2024-11-26 13:26:44.419729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:56.432 [2024-11-26 13:26:44.731932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:56.432 [2024-11-26 13:26:44.732955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:56.690 120.80 IOPS, 362.40 MiB/s [2024-11-26T13:26:45.260Z] [2024-11-26 13:26:45.209702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.949 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.950 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.950 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.950 "name": "raid_bdev1", 00:13:56.950 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:56.950 "strip_size_kb": 0, 00:13:56.950 "state": "online", 00:13:56.950 "raid_level": "raid1", 00:13:56.950 "superblock": true, 00:13:56.950 "num_base_bdevs": 4, 00:13:56.950 "num_base_bdevs_discovered": 3, 00:13:56.950 "num_base_bdevs_operational": 3, 00:13:56.950 "process": { 00:13:56.950 "type": "rebuild", 00:13:56.950 "target": "spare", 00:13:56.950 "progress": { 00:13:56.950 "blocks": 32768, 00:13:56.950 "percent": 51 00:13:56.950 } 00:13:56.950 }, 00:13:56.950 "base_bdevs_list": [ 00:13:56.950 { 00:13:56.950 "name": "spare", 00:13:56.950 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:56.950 "is_configured": true, 00:13:56.950 "data_offset": 2048, 00:13:56.950 "data_size": 63488 00:13:56.950 }, 00:13:56.950 { 00:13:56.950 "name": null, 00:13:56.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.950 "is_configured": false, 00:13:56.950 "data_offset": 0, 00:13:56.950 "data_size": 63488 00:13:56.950 }, 00:13:56.950 { 00:13:56.950 "name": "BaseBdev3", 00:13:56.950 "uuid": 
"f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:56.950 "is_configured": true, 00:13:56.950 "data_offset": 2048, 00:13:56.950 "data_size": 63488 00:13:56.950 }, 00:13:56.950 { 00:13:56.950 "name": "BaseBdev4", 00:13:56.950 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:56.950 "is_configured": true, 00:13:56.950 "data_offset": 2048, 00:13:56.950 "data_size": 63488 00:13:56.950 } 00:13:56.950 ] 00:13:56.950 }' 00:13:56.950 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.950 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.950 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.209 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.209 13:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.468 [2024-11-26 13:26:45.791952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:57.727 [2024-11-26 13:26:46.143903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:57.986 108.67 IOPS, 326.00 MiB/s [2024-11-26T13:26:46.556Z] [2024-11-26 13:26:46.481185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.986 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.246 "name": "raid_bdev1", 00:13:58.246 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:58.246 "strip_size_kb": 0, 00:13:58.246 "state": "online", 00:13:58.246 "raid_level": "raid1", 00:13:58.246 "superblock": true, 00:13:58.246 "num_base_bdevs": 4, 00:13:58.246 "num_base_bdevs_discovered": 3, 00:13:58.246 "num_base_bdevs_operational": 3, 00:13:58.246 "process": { 00:13:58.246 "type": "rebuild", 00:13:58.246 "target": "spare", 00:13:58.246 "progress": { 00:13:58.246 "blocks": 51200, 00:13:58.246 "percent": 80 00:13:58.246 } 00:13:58.246 }, 00:13:58.246 "base_bdevs_list": [ 00:13:58.246 { 00:13:58.246 "name": "spare", 00:13:58.246 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:58.246 "is_configured": true, 00:13:58.246 "data_offset": 2048, 00:13:58.246 "data_size": 63488 00:13:58.246 }, 00:13:58.246 { 00:13:58.246 "name": null, 00:13:58.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.246 "is_configured": false, 00:13:58.246 "data_offset": 0, 00:13:58.246 "data_size": 63488 00:13:58.246 }, 00:13:58.246 { 00:13:58.246 "name": "BaseBdev3", 00:13:58.246 "uuid": 
"f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:58.246 "is_configured": true, 00:13:58.246 "data_offset": 2048, 00:13:58.246 "data_size": 63488 00:13:58.246 }, 00:13:58.246 { 00:13:58.246 "name": "BaseBdev4", 00:13:58.246 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:58.246 "is_configured": true, 00:13:58.246 "data_offset": 2048, 00:13:58.246 "data_size": 63488 00:13:58.246 } 00:13:58.246 ] 00:13:58.246 }' 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.246 [2024-11-26 13:26:46.706423] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.246 13:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.814 98.14 IOPS, 294.43 MiB/s [2024-11-26T13:26:47.384Z] [2024-11-26 13:26:47.348376] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.073 [2024-11-26 13:26:47.448386] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.073 [2024-11-26 13:26:47.451196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.332 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.333 "name": "raid_bdev1", 00:13:59.333 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:59.333 "strip_size_kb": 0, 00:13:59.333 "state": "online", 00:13:59.333 "raid_level": "raid1", 00:13:59.333 "superblock": true, 00:13:59.333 "num_base_bdevs": 4, 00:13:59.333 "num_base_bdevs_discovered": 3, 00:13:59.333 "num_base_bdevs_operational": 3, 00:13:59.333 "base_bdevs_list": [ 00:13:59.333 { 00:13:59.333 "name": "spare", 00:13:59.333 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:59.333 "is_configured": true, 00:13:59.333 "data_offset": 2048, 00:13:59.333 "data_size": 63488 00:13:59.333 }, 00:13:59.333 { 00:13:59.333 "name": null, 00:13:59.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.333 "is_configured": false, 00:13:59.333 "data_offset": 0, 00:13:59.333 "data_size": 63488 00:13:59.333 }, 00:13:59.333 { 00:13:59.333 "name": "BaseBdev3", 00:13:59.333 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:59.333 "is_configured": true, 00:13:59.333 "data_offset": 2048, 00:13:59.333 "data_size": 63488 00:13:59.333 }, 
00:13:59.333 { 00:13:59.333 "name": "BaseBdev4", 00:13:59.333 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:59.333 "is_configured": true, 00:13:59.333 "data_offset": 2048, 00:13:59.333 "data_size": 63488 00:13:59.333 } 00:13:59.333 ] 00:13:59.333 }' 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.333 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.592 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.592 13:26:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.592 "name": "raid_bdev1", 00:13:59.592 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:59.592 "strip_size_kb": 0, 00:13:59.592 "state": "online", 00:13:59.592 "raid_level": "raid1", 00:13:59.592 "superblock": true, 00:13:59.592 "num_base_bdevs": 4, 00:13:59.592 "num_base_bdevs_discovered": 3, 00:13:59.592 "num_base_bdevs_operational": 3, 00:13:59.592 "base_bdevs_list": [ 00:13:59.592 { 00:13:59.592 "name": "spare", 00:13:59.592 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:59.592 "is_configured": true, 00:13:59.592 "data_offset": 2048, 00:13:59.592 "data_size": 63488 00:13:59.592 }, 00:13:59.592 { 00:13:59.592 "name": null, 00:13:59.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.592 "is_configured": false, 00:13:59.592 "data_offset": 0, 00:13:59.592 "data_size": 63488 00:13:59.592 }, 00:13:59.592 { 00:13:59.592 "name": "BaseBdev3", 00:13:59.592 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:59.592 "is_configured": true, 00:13:59.592 "data_offset": 2048, 00:13:59.592 "data_size": 63488 00:13:59.592 }, 00:13:59.592 { 00:13:59.592 "name": "BaseBdev4", 00:13:59.592 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:59.592 "is_configured": true, 00:13:59.592 "data_offset": 2048, 00:13:59.592 "data_size": 63488 00:13:59.592 } 00:13:59.592 ] 00:13:59.592 }' 00:13:59.592 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.592 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.592 13:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.592 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.592 "name": "raid_bdev1", 00:13:59.592 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:13:59.592 "strip_size_kb": 0, 00:13:59.593 "state": "online", 00:13:59.593 "raid_level": "raid1", 00:13:59.593 "superblock": true, 00:13:59.593 "num_base_bdevs": 4, 00:13:59.593 "num_base_bdevs_discovered": 3, 00:13:59.593 
"num_base_bdevs_operational": 3, 00:13:59.593 "base_bdevs_list": [ 00:13:59.593 { 00:13:59.593 "name": "spare", 00:13:59.593 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:13:59.593 "is_configured": true, 00:13:59.593 "data_offset": 2048, 00:13:59.593 "data_size": 63488 00:13:59.593 }, 00:13:59.593 { 00:13:59.593 "name": null, 00:13:59.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.593 "is_configured": false, 00:13:59.593 "data_offset": 0, 00:13:59.593 "data_size": 63488 00:13:59.593 }, 00:13:59.593 { 00:13:59.593 "name": "BaseBdev3", 00:13:59.593 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:13:59.593 "is_configured": true, 00:13:59.593 "data_offset": 2048, 00:13:59.593 "data_size": 63488 00:13:59.593 }, 00:13:59.593 { 00:13:59.593 "name": "BaseBdev4", 00:13:59.593 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:13:59.593 "is_configured": true, 00:13:59.593 "data_offset": 2048, 00:13:59.593 "data_size": 63488 00:13:59.593 } 00:13:59.593 ] 00:13:59.593 }' 00:13:59.593 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.593 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.111 90.00 IOPS, 270.00 MiB/s [2024-11-26T13:26:48.681Z] 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.111 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.111 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.111 [2024-11-26 13:26:48.558028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.111 [2024-11-26 13:26:48.558059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.111 00:14:00.111 Latency(us) 00:14:00.111 [2024-11-26T13:26:48.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.111 
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:00.111 raid_bdev1 : 8.48 86.67 260.02 0.00 0.00 16803.90 268.10 115819.99 00:14:00.111 [2024-11-26T13:26:48.681Z] =================================================================================================================== 00:14:00.111 [2024-11-26T13:26:48.681Z] Total : 86.67 260.02 0.00 0.00 16803.90 268.10 115819.99 00:14:00.111 [2024-11-26 13:26:48.669138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.111 [2024-11-26 13:26:48.669332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.111 [2024-11-26 13:26:48.669494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr{ 00:14:00.111 "results": [ 00:14:00.111 { 00:14:00.111 "job": "raid_bdev1", 00:14:00.111 "core_mask": "0x1", 00:14:00.111 "workload": "randrw", 00:14:00.111 "percentage": 50, 00:14:00.111 "status": "finished", 00:14:00.111 "queue_depth": 2, 00:14:00.111 "io_size": 3145728, 00:14:00.111 "runtime": 8.480273, 00:14:00.111 "iops": 86.6717380442823, 00:14:00.111 "mibps": 260.01521413284695, 00:14:00.111 "io_failed": 0, 00:14:00.111 "io_timeout": 0, 00:14:00.111 "avg_latency_us": 16803.90206060606, 00:14:00.111 "min_latency_us": 268.1018181818182, 00:14:00.111 "max_latency_us": 115819.98545454546 00:14:00.111 } 00:14:00.111 ], 00:14:00.111 "core_count": 1 00:14:00.111 } 00:14:00.111 ee all in destruct 00:14:00.111 [2024-11-26 13:26:48.669760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:00.111 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:00.371 13:26:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.371 13:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:00.631 /dev/nbd0 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.632 
13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.632 1+0 records in 00:14:00.632 1+0 records out 00:14:00.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347847 s, 11.8 MB/s 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.632 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:00.901 /dev/nbd1 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.901 1+0 records in 00:14:00.901 1+0 records out 00:14:00.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853034 s, 4.8 MB/s 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.901 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.901 13:26:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:01.163 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:01.163 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.163 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:01.163 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.164 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
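The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above skips the first 1 MiB of both devices (the region holding the raid superblock, which legitimately differs between the spare and a base bdev) and requires the remaining data to match byte-for-byte. A self-contained demonstration of the same `cmp --ignore-initial` behavior, using throwaway files in place of `/dev/nbd*`:

```shell
# Two files with identical payload past the first 1 MiB but different
# "superblocks" at offset 0, mirroring the bdev_raid.sh@731 comparison.
a=$(mktemp); b=$(mktemp)
dd if=/dev/zero of="$a" bs=1M count=2 status=none
cp "$a" "$b"
printf 'sb-A' | dd of="$a" conv=notrunc status=none
printf 'sb-B' | dd of="$b" conv=notrunc status=none
# -i N skips the first N bytes of BOTH inputs, so only the shared payload
# is compared and cmp exits 0 despite the differing superblock region.
cmp -i 1048576 "$a" "$b"
rm -f "$a" "$b"
```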
00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:01.423 /dev/nbd1 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.423 13:26:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.423 1+0 records in 00:14:01.423 1+0 records out 00:14:01.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248839 s, 16.5 MB/s 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.423 13:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:01.682 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:01.682 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.682 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:01.682 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:14:01.682 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.682 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.682 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:01.941 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.941 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.941 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.941 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.941 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.942 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.201 [2024-11-26 13:26:50.688531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.201 [2024-11-26 13:26:50.688597] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.201 [2024-11-26 13:26:50.688626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:02.201 [2024-11-26 13:26:50.688639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.201 [2024-11-26 13:26:50.691149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.201 [2024-11-26 13:26:50.691352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.201 [2024-11-26 13:26:50.691478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:02.201 [2024-11-26 13:26:50.691544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.201 [2024-11-26 13:26:50.691719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.201 [2024-11-26 13:26:50.691836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.201 spare 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.201 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.461 [2024-11-26 13:26:50.791944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:02.461 [2024-11-26 13:26:50.791969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.461 [2024-11-26 13:26:50.792279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:02.461 [2024-11-26 13:26:50.792471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:14:02.461 [2024-11-26 13:26:50.792495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:02.461 [2024-11-26 13:26:50.792670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.461 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.461 
13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.462 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.462 "name": "raid_bdev1", 00:14:02.462 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:02.462 "strip_size_kb": 0, 00:14:02.462 "state": "online", 00:14:02.462 "raid_level": "raid1", 00:14:02.462 "superblock": true, 00:14:02.462 "num_base_bdevs": 4, 00:14:02.462 "num_base_bdevs_discovered": 3, 00:14:02.462 "num_base_bdevs_operational": 3, 00:14:02.462 "base_bdevs_list": [ 00:14:02.462 { 00:14:02.462 "name": "spare", 00:14:02.462 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:14:02.462 "is_configured": true, 00:14:02.462 "data_offset": 2048, 00:14:02.462 "data_size": 63488 00:14:02.462 }, 00:14:02.462 { 00:14:02.462 "name": null, 00:14:02.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.462 "is_configured": false, 00:14:02.462 "data_offset": 2048, 00:14:02.462 "data_size": 63488 00:14:02.462 }, 00:14:02.462 { 00:14:02.462 "name": "BaseBdev3", 00:14:02.462 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:02.462 "is_configured": true, 00:14:02.462 "data_offset": 2048, 00:14:02.462 "data_size": 63488 00:14:02.462 }, 00:14:02.462 { 00:14:02.462 "name": "BaseBdev4", 00:14:02.462 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:02.462 "is_configured": true, 00:14:02.462 "data_offset": 2048, 00:14:02.462 "data_size": 63488 00:14:02.462 } 00:14:02.462 ] 00:14:02.462 }' 00:14:02.462 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.462 13:26:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.031 13:26:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.031 "name": "raid_bdev1", 00:14:03.031 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:03.031 "strip_size_kb": 0, 00:14:03.031 "state": "online", 00:14:03.031 "raid_level": "raid1", 00:14:03.031 "superblock": true, 00:14:03.031 "num_base_bdevs": 4, 00:14:03.031 "num_base_bdevs_discovered": 3, 00:14:03.031 "num_base_bdevs_operational": 3, 00:14:03.031 "base_bdevs_list": [ 00:14:03.031 { 00:14:03.031 "name": "spare", 00:14:03.031 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:14:03.031 "is_configured": true, 00:14:03.031 "data_offset": 2048, 00:14:03.031 "data_size": 63488 00:14:03.031 }, 00:14:03.031 { 00:14:03.031 "name": null, 00:14:03.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.031 "is_configured": false, 00:14:03.031 "data_offset": 2048, 00:14:03.031 "data_size": 63488 00:14:03.031 }, 00:14:03.031 { 00:14:03.031 "name": "BaseBdev3", 00:14:03.031 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:03.031 "is_configured": true, 00:14:03.031 "data_offset": 2048, 00:14:03.031 
"data_size": 63488 00:14:03.031 }, 00:14:03.031 { 00:14:03.031 "name": "BaseBdev4", 00:14:03.031 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:03.031 "is_configured": true, 00:14:03.031 "data_offset": 2048, 00:14:03.031 "data_size": 63488 00:14:03.031 } 00:14:03.031 ] 00:14:03.031 }' 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 [2024-11-26 13:26:51.540881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.031 13:26:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.031 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.290 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.290 "name": "raid_bdev1", 00:14:03.290 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:03.290 "strip_size_kb": 0, 00:14:03.290 "state": "online", 00:14:03.290 "raid_level": "raid1", 00:14:03.290 
"superblock": true, 00:14:03.290 "num_base_bdevs": 4, 00:14:03.290 "num_base_bdevs_discovered": 2, 00:14:03.290 "num_base_bdevs_operational": 2, 00:14:03.290 "base_bdevs_list": [ 00:14:03.290 { 00:14:03.290 "name": null, 00:14:03.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.290 "is_configured": false, 00:14:03.290 "data_offset": 0, 00:14:03.290 "data_size": 63488 00:14:03.290 }, 00:14:03.290 { 00:14:03.290 "name": null, 00:14:03.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.290 "is_configured": false, 00:14:03.290 "data_offset": 2048, 00:14:03.290 "data_size": 63488 00:14:03.290 }, 00:14:03.290 { 00:14:03.290 "name": "BaseBdev3", 00:14:03.290 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:03.290 "is_configured": true, 00:14:03.290 "data_offset": 2048, 00:14:03.290 "data_size": 63488 00:14:03.290 }, 00:14:03.291 { 00:14:03.291 "name": "BaseBdev4", 00:14:03.291 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:03.291 "is_configured": true, 00:14:03.291 "data_offset": 2048, 00:14:03.291 "data_size": 63488 00:14:03.291 } 00:14:03.291 ] 00:14:03.291 }' 00:14:03.291 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.291 13:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.550 13:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.550 13:26:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.550 13:26:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.550 [2024-11-26 13:26:52.061032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.550 [2024-11-26 13:26:52.061171] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:03.550 [2024-11-26 13:26:52.061189] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:03.550 [2024-11-26 13:26:52.061227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.550 [2024-11-26 13:26:52.072888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:03.550 13:26:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.550 13:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:03.550 [2024-11-26 13:26:52.075065] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.930 "name": "raid_bdev1", 00:14:04.930 "uuid": 
"b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:04.930 "strip_size_kb": 0, 00:14:04.930 "state": "online", 00:14:04.930 "raid_level": "raid1", 00:14:04.930 "superblock": true, 00:14:04.930 "num_base_bdevs": 4, 00:14:04.930 "num_base_bdevs_discovered": 3, 00:14:04.930 "num_base_bdevs_operational": 3, 00:14:04.930 "process": { 00:14:04.930 "type": "rebuild", 00:14:04.930 "target": "spare", 00:14:04.930 "progress": { 00:14:04.930 "blocks": 20480, 00:14:04.930 "percent": 32 00:14:04.930 } 00:14:04.930 }, 00:14:04.930 "base_bdevs_list": [ 00:14:04.930 { 00:14:04.930 "name": "spare", 00:14:04.930 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:14:04.930 "is_configured": true, 00:14:04.930 "data_offset": 2048, 00:14:04.930 "data_size": 63488 00:14:04.930 }, 00:14:04.930 { 00:14:04.930 "name": null, 00:14:04.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.930 "is_configured": false, 00:14:04.930 "data_offset": 2048, 00:14:04.930 "data_size": 63488 00:14:04.930 }, 00:14:04.930 { 00:14:04.930 "name": "BaseBdev3", 00:14:04.930 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:04.930 "is_configured": true, 00:14:04.930 "data_offset": 2048, 00:14:04.930 "data_size": 63488 00:14:04.930 }, 00:14:04.930 { 00:14:04.930 "name": "BaseBdev4", 00:14:04.930 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:04.930 "is_configured": true, 00:14:04.930 "data_offset": 2048, 00:14:04.930 "data_size": 63488 00:14:04.930 } 00:14:04.930 ] 00:14:04.930 }' 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.930 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.931 [2024-11-26 13:26:53.244587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.931 [2024-11-26 13:26:53.284542] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.931 [2024-11-26 13:26:53.284603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.931 [2024-11-26 13:26:53.284628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.931 [2024-11-26 13:26:53.284638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.931 "name": "raid_bdev1", 00:14:04.931 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:04.931 "strip_size_kb": 0, 00:14:04.931 "state": "online", 00:14:04.931 "raid_level": "raid1", 00:14:04.931 "superblock": true, 00:14:04.931 "num_base_bdevs": 4, 00:14:04.931 "num_base_bdevs_discovered": 2, 00:14:04.931 "num_base_bdevs_operational": 2, 00:14:04.931 "base_bdevs_list": [ 00:14:04.931 { 00:14:04.931 "name": null, 00:14:04.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.931 "is_configured": false, 00:14:04.931 "data_offset": 0, 00:14:04.931 "data_size": 63488 00:14:04.931 }, 00:14:04.931 { 00:14:04.931 "name": null, 00:14:04.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.931 "is_configured": false, 00:14:04.931 "data_offset": 2048, 00:14:04.931 "data_size": 63488 00:14:04.931 }, 00:14:04.931 { 00:14:04.931 "name": "BaseBdev3", 00:14:04.931 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:04.931 "is_configured": true, 00:14:04.931 "data_offset": 2048, 00:14:04.931 "data_size": 63488 00:14:04.931 }, 00:14:04.931 { 00:14:04.931 "name": "BaseBdev4", 00:14:04.931 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 
00:14:04.931 "is_configured": true, 00:14:04.931 "data_offset": 2048, 00:14:04.931 "data_size": 63488 00:14:04.931 } 00:14:04.931 ] 00:14:04.931 }' 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.931 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.499 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.499 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.499 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.499 [2024-11-26 13:26:53.817893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.499 [2024-11-26 13:26:53.818098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.499 [2024-11-26 13:26:53.818143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:05.499 [2024-11-26 13:26:53.818157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.499 [2024-11-26 13:26:53.818720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.499 [2024-11-26 13:26:53.818751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.499 [2024-11-26 13:26:53.818855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:05.499 [2024-11-26 13:26:53.818872] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:05.499 [2024-11-26 13:26:53.818886] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:05.499 [2024-11-26 13:26:53.818912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.499 [2024-11-26 13:26:53.828837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:05.499 spare 00:14:05.499 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.499 13:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:05.499 [2024-11-26 13:26:53.831189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.436 "name": "raid_bdev1", 00:14:06.436 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:06.436 "strip_size_kb": 0, 00:14:06.436 
"state": "online", 00:14:06.436 "raid_level": "raid1", 00:14:06.436 "superblock": true, 00:14:06.436 "num_base_bdevs": 4, 00:14:06.436 "num_base_bdevs_discovered": 3, 00:14:06.436 "num_base_bdevs_operational": 3, 00:14:06.436 "process": { 00:14:06.436 "type": "rebuild", 00:14:06.436 "target": "spare", 00:14:06.436 "progress": { 00:14:06.436 "blocks": 20480, 00:14:06.436 "percent": 32 00:14:06.436 } 00:14:06.436 }, 00:14:06.436 "base_bdevs_list": [ 00:14:06.436 { 00:14:06.436 "name": "spare", 00:14:06.436 "uuid": "ad75f8ec-66f2-5001-a517-0fc30853bceb", 00:14:06.436 "is_configured": true, 00:14:06.436 "data_offset": 2048, 00:14:06.436 "data_size": 63488 00:14:06.436 }, 00:14:06.436 { 00:14:06.436 "name": null, 00:14:06.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.436 "is_configured": false, 00:14:06.436 "data_offset": 2048, 00:14:06.436 "data_size": 63488 00:14:06.436 }, 00:14:06.436 { 00:14:06.436 "name": "BaseBdev3", 00:14:06.436 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:06.436 "is_configured": true, 00:14:06.436 "data_offset": 2048, 00:14:06.436 "data_size": 63488 00:14:06.436 }, 00:14:06.436 { 00:14:06.436 "name": "BaseBdev4", 00:14:06.436 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:06.436 "is_configured": true, 00:14:06.436 "data_offset": 2048, 00:14:06.436 "data_size": 63488 00:14:06.436 } 00:14:06.436 ] 00:14:06.436 }' 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.436 13:26:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.436 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.436 [2024-11-26 13:26:54.997577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.696 [2024-11-26 13:26:55.038056] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.696 [2024-11-26 13:26:55.038122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.696 [2024-11-26 13:26:55.038143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.696 [2024-11-26 13:26:55.038154] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.696 13:26:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.696 "name": "raid_bdev1", 00:14:06.696 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:06.696 "strip_size_kb": 0, 00:14:06.696 "state": "online", 00:14:06.696 "raid_level": "raid1", 00:14:06.696 "superblock": true, 00:14:06.696 "num_base_bdevs": 4, 00:14:06.696 "num_base_bdevs_discovered": 2, 00:14:06.696 "num_base_bdevs_operational": 2, 00:14:06.696 "base_bdevs_list": [ 00:14:06.696 { 00:14:06.696 "name": null, 00:14:06.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.696 "is_configured": false, 00:14:06.696 "data_offset": 0, 00:14:06.696 "data_size": 63488 00:14:06.696 }, 00:14:06.696 { 00:14:06.696 "name": null, 00:14:06.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.696 "is_configured": false, 00:14:06.696 "data_offset": 2048, 00:14:06.696 "data_size": 63488 00:14:06.696 }, 00:14:06.696 { 00:14:06.696 "name": "BaseBdev3", 00:14:06.696 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:06.696 "is_configured": true, 00:14:06.696 "data_offset": 2048, 00:14:06.696 "data_size": 63488 00:14:06.696 }, 00:14:06.696 { 00:14:06.696 "name": "BaseBdev4", 00:14:06.696 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:06.696 "is_configured": true, 00:14:06.696 "data_offset": 2048, 00:14:06.696 
"data_size": 63488 00:14:06.696 } 00:14:06.696 ] 00:14:06.696 }' 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.696 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.266 "name": "raid_bdev1", 00:14:07.266 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:07.266 "strip_size_kb": 0, 00:14:07.266 "state": "online", 00:14:07.266 "raid_level": "raid1", 00:14:07.266 "superblock": true, 00:14:07.266 "num_base_bdevs": 4, 00:14:07.266 "num_base_bdevs_discovered": 2, 00:14:07.266 "num_base_bdevs_operational": 2, 00:14:07.266 "base_bdevs_list": [ 00:14:07.266 { 00:14:07.266 "name": null, 00:14:07.266 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:07.266 "is_configured": false, 00:14:07.266 "data_offset": 0, 00:14:07.266 "data_size": 63488 00:14:07.266 }, 00:14:07.266 { 00:14:07.266 "name": null, 00:14:07.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.266 "is_configured": false, 00:14:07.266 "data_offset": 2048, 00:14:07.266 "data_size": 63488 00:14:07.266 }, 00:14:07.266 { 00:14:07.266 "name": "BaseBdev3", 00:14:07.266 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:07.266 "is_configured": true, 00:14:07.266 "data_offset": 2048, 00:14:07.266 "data_size": 63488 00:14:07.266 }, 00:14:07.266 { 00:14:07.266 "name": "BaseBdev4", 00:14:07.266 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:07.266 "is_configured": true, 00:14:07.266 "data_offset": 2048, 00:14:07.266 "data_size": 63488 00:14:07.266 } 00:14:07.266 ] 00:14:07.266 }' 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.266 13:26:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.266 [2024-11-26 13:26:55.755102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.266 [2024-11-26 13:26:55.755382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.266 [2024-11-26 13:26:55.755416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:07.266 [2024-11-26 13:26:55.755434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.266 [2024-11-26 13:26:55.755942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.266 [2024-11-26 13:26:55.755977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.266 [2024-11-26 13:26:55.756059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:07.266 [2024-11-26 13:26:55.756081] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:07.266 [2024-11-26 13:26:55.756091] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.266 [2024-11-26 13:26:55.756103] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:07.266 BaseBdev1 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.266 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.204 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.463 "name": "raid_bdev1", 00:14:08.463 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:08.463 "strip_size_kb": 0, 00:14:08.463 "state": "online", 00:14:08.463 "raid_level": "raid1", 00:14:08.463 "superblock": true, 00:14:08.463 "num_base_bdevs": 4, 00:14:08.463 "num_base_bdevs_discovered": 2, 00:14:08.463 "num_base_bdevs_operational": 2, 00:14:08.463 "base_bdevs_list": [ 00:14:08.463 { 00:14:08.463 "name": null, 00:14:08.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.463 "is_configured": false, 00:14:08.463 
"data_offset": 0, 00:14:08.463 "data_size": 63488 00:14:08.463 }, 00:14:08.463 { 00:14:08.463 "name": null, 00:14:08.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.463 "is_configured": false, 00:14:08.463 "data_offset": 2048, 00:14:08.463 "data_size": 63488 00:14:08.463 }, 00:14:08.463 { 00:14:08.463 "name": "BaseBdev3", 00:14:08.463 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:08.463 "is_configured": true, 00:14:08.463 "data_offset": 2048, 00:14:08.463 "data_size": 63488 00:14:08.463 }, 00:14:08.463 { 00:14:08.463 "name": "BaseBdev4", 00:14:08.463 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:08.463 "is_configured": true, 00:14:08.463 "data_offset": 2048, 00:14:08.463 "data_size": 63488 00:14:08.463 } 00:14:08.463 ] 00:14:08.463 }' 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.463 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.722 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.722 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.722 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.722 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.722 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.981 "name": "raid_bdev1", 00:14:08.981 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:08.981 "strip_size_kb": 0, 00:14:08.981 "state": "online", 00:14:08.981 "raid_level": "raid1", 00:14:08.981 "superblock": true, 00:14:08.981 "num_base_bdevs": 4, 00:14:08.981 "num_base_bdevs_discovered": 2, 00:14:08.981 "num_base_bdevs_operational": 2, 00:14:08.981 "base_bdevs_list": [ 00:14:08.981 { 00:14:08.981 "name": null, 00:14:08.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.981 "is_configured": false, 00:14:08.981 "data_offset": 0, 00:14:08.981 "data_size": 63488 00:14:08.981 }, 00:14:08.981 { 00:14:08.981 "name": null, 00:14:08.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.981 "is_configured": false, 00:14:08.981 "data_offset": 2048, 00:14:08.981 "data_size": 63488 00:14:08.981 }, 00:14:08.981 { 00:14:08.981 "name": "BaseBdev3", 00:14:08.981 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:08.981 "is_configured": true, 00:14:08.981 "data_offset": 2048, 00:14:08.981 "data_size": 63488 00:14:08.981 }, 00:14:08.981 { 00:14:08.981 "name": "BaseBdev4", 00:14:08.981 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:08.981 "is_configured": true, 00:14:08.981 "data_offset": 2048, 00:14:08.981 "data_size": 63488 00:14:08.981 } 00:14:08.981 ] 00:14:08.981 }' 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.981 [2024-11-26 13:26:57.455790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.981 [2024-11-26 13:26:57.455909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:08.981 [2024-11-26 13:26:57.455926] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:08.981 request: 00:14:08.981 { 00:14:08.981 "base_bdev": "BaseBdev1", 00:14:08.981 "raid_bdev": "raid_bdev1", 00:14:08.981 "method": "bdev_raid_add_base_bdev", 00:14:08.981 "req_id": 1 00:14:08.981 } 00:14:08.981 Got JSON-RPC error response 00:14:08.981 response: 00:14:08.981 { 00:14:08.981 "code": -22, 
00:14:08.981 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:08.981 } 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.981 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.919 13:26:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.919 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.179 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.179 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.179 "name": "raid_bdev1", 00:14:10.179 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:10.179 "strip_size_kb": 0, 00:14:10.179 "state": "online", 00:14:10.179 "raid_level": "raid1", 00:14:10.179 "superblock": true, 00:14:10.179 "num_base_bdevs": 4, 00:14:10.179 "num_base_bdevs_discovered": 2, 00:14:10.179 "num_base_bdevs_operational": 2, 00:14:10.179 "base_bdevs_list": [ 00:14:10.179 { 00:14:10.179 "name": null, 00:14:10.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.179 "is_configured": false, 00:14:10.179 "data_offset": 0, 00:14:10.179 "data_size": 63488 00:14:10.179 }, 00:14:10.179 { 00:14:10.179 "name": null, 00:14:10.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.179 "is_configured": false, 00:14:10.179 "data_offset": 2048, 00:14:10.179 "data_size": 63488 00:14:10.179 }, 00:14:10.179 { 00:14:10.179 "name": "BaseBdev3", 00:14:10.179 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:10.179 "is_configured": true, 00:14:10.179 "data_offset": 2048, 00:14:10.179 "data_size": 63488 00:14:10.179 }, 00:14:10.179 { 00:14:10.179 "name": "BaseBdev4", 00:14:10.179 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:10.179 "is_configured": true, 00:14:10.179 "data_offset": 2048, 00:14:10.179 "data_size": 63488 00:14:10.179 } 00:14:10.179 ] 00:14:10.179 }' 00:14:10.179 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.179 13:26:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.438 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.705 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.705 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.705 "name": "raid_bdev1", 00:14:10.706 "uuid": "b614eb56-4c46-4595-b6da-6ffda70343a0", 00:14:10.706 "strip_size_kb": 0, 00:14:10.706 "state": "online", 00:14:10.706 "raid_level": "raid1", 00:14:10.706 "superblock": true, 00:14:10.706 "num_base_bdevs": 4, 00:14:10.706 "num_base_bdevs_discovered": 2, 00:14:10.706 "num_base_bdevs_operational": 2, 00:14:10.706 "base_bdevs_list": [ 00:14:10.706 { 00:14:10.706 "name": null, 00:14:10.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.706 "is_configured": false, 00:14:10.706 "data_offset": 0, 00:14:10.706 "data_size": 63488 00:14:10.706 }, 00:14:10.706 { 00:14:10.706 "name": null, 00:14:10.706 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:10.706 "is_configured": false, 00:14:10.706 "data_offset": 2048, 00:14:10.706 "data_size": 63488 00:14:10.706 }, 00:14:10.706 { 00:14:10.706 "name": "BaseBdev3", 00:14:10.706 "uuid": "f7a21a4c-e679-5e16-8555-a88481c944e5", 00:14:10.706 "is_configured": true, 00:14:10.706 "data_offset": 2048, 00:14:10.706 "data_size": 63488 00:14:10.706 }, 00:14:10.706 { 00:14:10.706 "name": "BaseBdev4", 00:14:10.706 "uuid": "3a25e41c-fd66-5122-a0b8-5d308c8840db", 00:14:10.706 "is_configured": true, 00:14:10.706 "data_offset": 2048, 00:14:10.706 "data_size": 63488 00:14:10.706 } 00:14:10.706 ] 00:14:10.706 }' 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78771 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78771 ']' 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78771 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78771 00:14:10.706 killing process with pid 78771 00:14:10.706 Received shutdown signal, test time was about 19.005252 seconds 00:14:10.706 00:14:10.706 Latency(us) 00:14:10.706 [2024-11-26T13:26:59.276Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:10.706 [2024-11-26T13:26:59.276Z] =================================================================================================================== 00:14:10.706 [2024-11-26T13:26:59.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78771' 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78771 00:14:10.706 [2024-11-26 13:26:59.178970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.706 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78771 00:14:10.706 [2024-11-26 13:26:59.179064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.706 [2024-11-26 13:26:59.179142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.706 [2024-11-26 13:26:59.179156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:10.972 [2024-11-26 13:26:59.480613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.910 ************************************ 00:14:11.910 END TEST raid_rebuild_test_sb_io 00:14:11.910 ************************************ 00:14:11.910 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:11.910 00:14:11.910 real 0m22.334s 00:14:11.910 user 0m30.531s 00:14:11.910 sys 0m2.229s 00:14:11.910 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.910 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.169 13:27:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:12.169 13:27:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:12.169 13:27:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:12.169 13:27:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.169 13:27:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.169 ************************************ 00:14:12.169 START TEST raid5f_state_function_test 00:14:12.169 ************************************ 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.170 13:27:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.170 Process raid pid: 79503 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79503 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79503' 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 79503 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79503 ']' 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.170 13:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.170 [2024-11-26 13:27:00.616010] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:14:12.170 [2024-11-26 13:27:00.616412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.429 [2024-11-26 13:27:00.798347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.429 [2024-11-26 13:27:00.910342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.688 [2024-11-26 13:27:01.100849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.688 [2024-11-26 13:27:01.101140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 [2024-11-26 13:27:01.606352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.256 [2024-11-26 13:27:01.606427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.256 [2024-11-26 13:27:01.606443] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.256 [2024-11-26 13:27:01.606457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.256 [2024-11-26 13:27:01.606465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:13.256 [2024-11-26 13:27:01.606477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:13.256 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.256 "name": "Existed_Raid", 00:14:13.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.256 "strip_size_kb": 64, 00:14:13.256 "state": "configuring", 00:14:13.256 "raid_level": "raid5f", 00:14:13.256 "superblock": false, 00:14:13.256 "num_base_bdevs": 3, 00:14:13.256 "num_base_bdevs_discovered": 0, 00:14:13.256 "num_base_bdevs_operational": 3, 00:14:13.256 "base_bdevs_list": [ 00:14:13.256 { 00:14:13.256 "name": "BaseBdev1", 00:14:13.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.256 "is_configured": false, 00:14:13.256 "data_offset": 0, 00:14:13.256 "data_size": 0 00:14:13.256 }, 00:14:13.256 { 00:14:13.256 "name": "BaseBdev2", 00:14:13.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.256 "is_configured": false, 00:14:13.257 "data_offset": 0, 00:14:13.257 "data_size": 0 00:14:13.257 }, 00:14:13.257 { 00:14:13.257 "name": "BaseBdev3", 00:14:13.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.257 "is_configured": false, 00:14:13.257 "data_offset": 0, 00:14:13.257 "data_size": 0 00:14:13.257 } 00:14:13.257 ] 00:14:13.257 }' 00:14:13.257 13:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.257 13:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.824 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:13.824 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.824 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.824 [2024-11-26 13:27:02.130381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.824 [2024-11-26 13:27:02.130668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:13.824 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.824 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:13.824 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.824 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.824 [2024-11-26 13:27:02.138390] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.825 [2024-11-26 13:27:02.138437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.825 [2024-11-26 13:27:02.138449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.825 [2024-11-26 13:27:02.138463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.825 [2024-11-26 13:27:02.138470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:13.825 [2024-11-26 13:27:02.138482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.825 [2024-11-26 13:27:02.180731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.825 BaseBdev1 00:14:13.825 13:27:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.825 [ 00:14:13.825 { 00:14:13.825 "name": "BaseBdev1", 00:14:13.825 "aliases": [ 00:14:13.825 "e88f1618-4969-411d-b14c-22f56023f69b" 00:14:13.825 ], 00:14:13.825 "product_name": "Malloc disk", 00:14:13.825 "block_size": 512, 00:14:13.825 "num_blocks": 65536, 00:14:13.825 "uuid": "e88f1618-4969-411d-b14c-22f56023f69b", 00:14:13.825 "assigned_rate_limits": { 00:14:13.825 "rw_ios_per_sec": 0, 00:14:13.825 
"rw_mbytes_per_sec": 0, 00:14:13.825 "r_mbytes_per_sec": 0, 00:14:13.825 "w_mbytes_per_sec": 0 00:14:13.825 }, 00:14:13.825 "claimed": true, 00:14:13.825 "claim_type": "exclusive_write", 00:14:13.825 "zoned": false, 00:14:13.825 "supported_io_types": { 00:14:13.825 "read": true, 00:14:13.825 "write": true, 00:14:13.825 "unmap": true, 00:14:13.825 "flush": true, 00:14:13.825 "reset": true, 00:14:13.825 "nvme_admin": false, 00:14:13.825 "nvme_io": false, 00:14:13.825 "nvme_io_md": false, 00:14:13.825 "write_zeroes": true, 00:14:13.825 "zcopy": true, 00:14:13.825 "get_zone_info": false, 00:14:13.825 "zone_management": false, 00:14:13.825 "zone_append": false, 00:14:13.825 "compare": false, 00:14:13.825 "compare_and_write": false, 00:14:13.825 "abort": true, 00:14:13.825 "seek_hole": false, 00:14:13.825 "seek_data": false, 00:14:13.825 "copy": true, 00:14:13.825 "nvme_iov_md": false 00:14:13.825 }, 00:14:13.825 "memory_domains": [ 00:14:13.825 { 00:14:13.825 "dma_device_id": "system", 00:14:13.825 "dma_device_type": 1 00:14:13.825 }, 00:14:13.825 { 00:14:13.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.825 "dma_device_type": 2 00:14:13.825 } 00:14:13.825 ], 00:14:13.825 "driver_specific": {} 00:14:13.825 } 00:14:13.825 ] 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.825 13:27:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.825 "name": "Existed_Raid", 00:14:13.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.825 "strip_size_kb": 64, 00:14:13.825 "state": "configuring", 00:14:13.825 "raid_level": "raid5f", 00:14:13.825 "superblock": false, 00:14:13.825 "num_base_bdevs": 3, 00:14:13.825 "num_base_bdevs_discovered": 1, 00:14:13.825 "num_base_bdevs_operational": 3, 00:14:13.825 "base_bdevs_list": [ 00:14:13.825 { 00:14:13.825 "name": "BaseBdev1", 00:14:13.825 "uuid": "e88f1618-4969-411d-b14c-22f56023f69b", 00:14:13.825 "is_configured": true, 00:14:13.825 "data_offset": 0, 00:14:13.825 "data_size": 65536 00:14:13.825 }, 00:14:13.825 { 00:14:13.825 "name": 
"BaseBdev2", 00:14:13.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.825 "is_configured": false, 00:14:13.825 "data_offset": 0, 00:14:13.825 "data_size": 0 00:14:13.825 }, 00:14:13.825 { 00:14:13.825 "name": "BaseBdev3", 00:14:13.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.825 "is_configured": false, 00:14:13.825 "data_offset": 0, 00:14:13.825 "data_size": 0 00:14:13.825 } 00:14:13.825 ] 00:14:13.825 }' 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.825 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.394 [2024-11-26 13:27:02.732838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.394 [2024-11-26 13:27:02.732876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.394 [2024-11-26 13:27:02.740899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.394 [2024-11-26 13:27:02.743022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:14.394 [2024-11-26 13:27:02.743322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.394 [2024-11-26 13:27:02.743349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:14.394 [2024-11-26 13:27:02.743365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.394 "name": "Existed_Raid", 00:14:14.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.394 "strip_size_kb": 64, 00:14:14.394 "state": "configuring", 00:14:14.394 "raid_level": "raid5f", 00:14:14.394 "superblock": false, 00:14:14.394 "num_base_bdevs": 3, 00:14:14.394 "num_base_bdevs_discovered": 1, 00:14:14.394 "num_base_bdevs_operational": 3, 00:14:14.394 "base_bdevs_list": [ 00:14:14.394 { 00:14:14.394 "name": "BaseBdev1", 00:14:14.394 "uuid": "e88f1618-4969-411d-b14c-22f56023f69b", 00:14:14.394 "is_configured": true, 00:14:14.394 "data_offset": 0, 00:14:14.394 "data_size": 65536 00:14:14.394 }, 00:14:14.394 { 00:14:14.394 "name": "BaseBdev2", 00:14:14.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.394 "is_configured": false, 00:14:14.394 "data_offset": 0, 00:14:14.394 "data_size": 0 00:14:14.394 }, 00:14:14.394 { 00:14:14.394 "name": "BaseBdev3", 00:14:14.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.394 "is_configured": false, 00:14:14.394 "data_offset": 0, 00:14:14.394 "data_size": 0 00:14:14.394 } 00:14:14.394 ] 00:14:14.394 }' 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.394 13:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.961 [2024-11-26 13:27:03.292634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.961 BaseBdev2 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.961 [ 00:14:14.961 { 00:14:14.961 "name": "BaseBdev2", 00:14:14.961 "aliases": [ 00:14:14.961 "1045677c-c55f-4046-8e41-e337d07f04ca" 00:14:14.961 ], 00:14:14.961 "product_name": "Malloc disk", 00:14:14.961 "block_size": 512, 00:14:14.961 "num_blocks": 65536, 00:14:14.961 "uuid": "1045677c-c55f-4046-8e41-e337d07f04ca", 00:14:14.961 "assigned_rate_limits": { 00:14:14.961 "rw_ios_per_sec": 0, 00:14:14.961 "rw_mbytes_per_sec": 0, 00:14:14.961 "r_mbytes_per_sec": 0, 00:14:14.961 "w_mbytes_per_sec": 0 00:14:14.961 }, 00:14:14.961 "claimed": true, 00:14:14.961 "claim_type": "exclusive_write", 00:14:14.961 "zoned": false, 00:14:14.961 "supported_io_types": { 00:14:14.961 "read": true, 00:14:14.961 "write": true, 00:14:14.961 "unmap": true, 00:14:14.961 "flush": true, 00:14:14.961 "reset": true, 00:14:14.961 "nvme_admin": false, 00:14:14.961 "nvme_io": false, 00:14:14.961 "nvme_io_md": false, 00:14:14.961 "write_zeroes": true, 00:14:14.961 "zcopy": true, 00:14:14.961 "get_zone_info": false, 00:14:14.961 "zone_management": false, 00:14:14.961 "zone_append": false, 00:14:14.961 "compare": false, 00:14:14.961 "compare_and_write": false, 00:14:14.961 "abort": true, 00:14:14.961 "seek_hole": false, 00:14:14.961 "seek_data": false, 00:14:14.961 "copy": true, 00:14:14.961 "nvme_iov_md": false 00:14:14.961 }, 00:14:14.961 "memory_domains": [ 00:14:14.961 { 00:14:14.961 "dma_device_id": "system", 00:14:14.961 "dma_device_type": 1 00:14:14.961 }, 00:14:14.961 { 00:14:14.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.961 "dma_device_type": 2 00:14:14.961 } 00:14:14.961 ], 00:14:14.961 "driver_specific": {} 00:14:14.961 } 00:14:14.961 ] 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.961 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:14.962 "name": "Existed_Raid", 00:14:14.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.962 "strip_size_kb": 64, 00:14:14.962 "state": "configuring", 00:14:14.962 "raid_level": "raid5f", 00:14:14.962 "superblock": false, 00:14:14.962 "num_base_bdevs": 3, 00:14:14.962 "num_base_bdevs_discovered": 2, 00:14:14.962 "num_base_bdevs_operational": 3, 00:14:14.962 "base_bdevs_list": [ 00:14:14.962 { 00:14:14.962 "name": "BaseBdev1", 00:14:14.962 "uuid": "e88f1618-4969-411d-b14c-22f56023f69b", 00:14:14.962 "is_configured": true, 00:14:14.962 "data_offset": 0, 00:14:14.962 "data_size": 65536 00:14:14.962 }, 00:14:14.962 { 00:14:14.962 "name": "BaseBdev2", 00:14:14.962 "uuid": "1045677c-c55f-4046-8e41-e337d07f04ca", 00:14:14.962 "is_configured": true, 00:14:14.962 "data_offset": 0, 00:14:14.962 "data_size": 65536 00:14:14.962 }, 00:14:14.962 { 00:14:14.962 "name": "BaseBdev3", 00:14:14.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.962 "is_configured": false, 00:14:14.962 "data_offset": 0, 00:14:14.962 "data_size": 0 00:14:14.962 } 00:14:14.962 ] 00:14:14.962 }' 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.962 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.549 [2024-11-26 13:27:03.889440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.549 [2024-11-26 13:27:03.889545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:15.549 [2024-11-26 13:27:03.889569] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:15.549 [2024-11-26 13:27:03.889888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:15.549 [2024-11-26 13:27:03.894209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:15.549 [2024-11-26 13:27:03.894244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:15.549 [2024-11-26 13:27:03.894523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.549 BaseBdev3 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.549 [ 00:14:15.549 { 00:14:15.549 "name": "BaseBdev3", 00:14:15.549 "aliases": [ 00:14:15.549 "efc0fd41-806d-48c5-8e0c-385ac59b0e7a" 00:14:15.549 ], 00:14:15.549 "product_name": "Malloc disk", 00:14:15.549 "block_size": 512, 00:14:15.549 "num_blocks": 65536, 00:14:15.549 "uuid": "efc0fd41-806d-48c5-8e0c-385ac59b0e7a", 00:14:15.549 "assigned_rate_limits": { 00:14:15.549 "rw_ios_per_sec": 0, 00:14:15.549 "rw_mbytes_per_sec": 0, 00:14:15.549 "r_mbytes_per_sec": 0, 00:14:15.549 "w_mbytes_per_sec": 0 00:14:15.549 }, 00:14:15.549 "claimed": true, 00:14:15.549 "claim_type": "exclusive_write", 00:14:15.549 "zoned": false, 00:14:15.549 "supported_io_types": { 00:14:15.549 "read": true, 00:14:15.549 "write": true, 00:14:15.549 "unmap": true, 00:14:15.549 "flush": true, 00:14:15.549 "reset": true, 00:14:15.549 "nvme_admin": false, 00:14:15.549 "nvme_io": false, 00:14:15.549 "nvme_io_md": false, 00:14:15.549 "write_zeroes": true, 00:14:15.549 "zcopy": true, 00:14:15.549 "get_zone_info": false, 00:14:15.549 "zone_management": false, 00:14:15.549 "zone_append": false, 00:14:15.549 "compare": false, 00:14:15.549 "compare_and_write": false, 00:14:15.549 "abort": true, 00:14:15.549 "seek_hole": false, 00:14:15.549 "seek_data": false, 00:14:15.549 "copy": true, 00:14:15.549 "nvme_iov_md": false 00:14:15.549 }, 00:14:15.549 "memory_domains": [ 00:14:15.549 { 00:14:15.549 "dma_device_id": "system", 00:14:15.549 "dma_device_type": 1 00:14:15.549 }, 00:14:15.549 { 00:14:15.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.549 "dma_device_type": 2 00:14:15.549 } 00:14:15.549 ], 00:14:15.549 "driver_specific": {} 00:14:15.549 } 00:14:15.549 ] 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.549 13:27:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.549 "name": "Existed_Raid", 00:14:15.549 "uuid": "28f94ac5-f723-486c-ae6e-e2e72f79f789", 00:14:15.549 "strip_size_kb": 64, 00:14:15.549 "state": "online", 00:14:15.549 "raid_level": "raid5f", 00:14:15.549 "superblock": false, 00:14:15.549 "num_base_bdevs": 3, 00:14:15.549 "num_base_bdevs_discovered": 3, 00:14:15.549 "num_base_bdevs_operational": 3, 00:14:15.549 "base_bdevs_list": [ 00:14:15.549 { 00:14:15.549 "name": "BaseBdev1", 00:14:15.549 "uuid": "e88f1618-4969-411d-b14c-22f56023f69b", 00:14:15.549 "is_configured": true, 00:14:15.549 "data_offset": 0, 00:14:15.549 "data_size": 65536 00:14:15.549 }, 00:14:15.549 { 00:14:15.549 "name": "BaseBdev2", 00:14:15.549 "uuid": "1045677c-c55f-4046-8e41-e337d07f04ca", 00:14:15.549 "is_configured": true, 00:14:15.549 "data_offset": 0, 00:14:15.549 "data_size": 65536 00:14:15.549 }, 00:14:15.549 { 00:14:15.549 "name": "BaseBdev3", 00:14:15.549 "uuid": "efc0fd41-806d-48c5-8e0c-385ac59b0e7a", 00:14:15.549 "is_configured": true, 00:14:15.549 "data_offset": 0, 00:14:15.549 "data_size": 65536 00:14:15.549 } 00:14:15.549 ] 00:14:15.549 }' 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.549 13:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.118 13:27:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.118 [2024-11-26 13:27:04.455771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.118 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.118 "name": "Existed_Raid", 00:14:16.118 "aliases": [ 00:14:16.118 "28f94ac5-f723-486c-ae6e-e2e72f79f789" 00:14:16.118 ], 00:14:16.118 "product_name": "Raid Volume", 00:14:16.118 "block_size": 512, 00:14:16.118 "num_blocks": 131072, 00:14:16.118 "uuid": "28f94ac5-f723-486c-ae6e-e2e72f79f789", 00:14:16.118 "assigned_rate_limits": { 00:14:16.118 "rw_ios_per_sec": 0, 00:14:16.118 "rw_mbytes_per_sec": 0, 00:14:16.118 "r_mbytes_per_sec": 0, 00:14:16.118 "w_mbytes_per_sec": 0 00:14:16.118 }, 00:14:16.118 "claimed": false, 00:14:16.118 "zoned": false, 00:14:16.118 "supported_io_types": { 00:14:16.118 "read": true, 00:14:16.118 "write": true, 00:14:16.118 "unmap": false, 00:14:16.118 "flush": false, 00:14:16.118 "reset": true, 00:14:16.118 "nvme_admin": false, 00:14:16.118 "nvme_io": false, 00:14:16.118 "nvme_io_md": false, 00:14:16.118 "write_zeroes": true, 00:14:16.118 "zcopy": false, 00:14:16.118 "get_zone_info": false, 00:14:16.118 "zone_management": false, 00:14:16.118 "zone_append": false, 
00:14:16.118 "compare": false, 00:14:16.118 "compare_and_write": false, 00:14:16.118 "abort": false, 00:14:16.118 "seek_hole": false, 00:14:16.118 "seek_data": false, 00:14:16.118 "copy": false, 00:14:16.118 "nvme_iov_md": false 00:14:16.119 }, 00:14:16.119 "driver_specific": { 00:14:16.119 "raid": { 00:14:16.119 "uuid": "28f94ac5-f723-486c-ae6e-e2e72f79f789", 00:14:16.119 "strip_size_kb": 64, 00:14:16.119 "state": "online", 00:14:16.119 "raid_level": "raid5f", 00:14:16.119 "superblock": false, 00:14:16.119 "num_base_bdevs": 3, 00:14:16.119 "num_base_bdevs_discovered": 3, 00:14:16.119 "num_base_bdevs_operational": 3, 00:14:16.119 "base_bdevs_list": [ 00:14:16.119 { 00:14:16.119 "name": "BaseBdev1", 00:14:16.119 "uuid": "e88f1618-4969-411d-b14c-22f56023f69b", 00:14:16.119 "is_configured": true, 00:14:16.119 "data_offset": 0, 00:14:16.119 "data_size": 65536 00:14:16.119 }, 00:14:16.119 { 00:14:16.119 "name": "BaseBdev2", 00:14:16.119 "uuid": "1045677c-c55f-4046-8e41-e337d07f04ca", 00:14:16.119 "is_configured": true, 00:14:16.119 "data_offset": 0, 00:14:16.119 "data_size": 65536 00:14:16.119 }, 00:14:16.119 { 00:14:16.119 "name": "BaseBdev3", 00:14:16.119 "uuid": "efc0fd41-806d-48c5-8e0c-385ac59b0e7a", 00:14:16.119 "is_configured": true, 00:14:16.119 "data_offset": 0, 00:14:16.119 "data_size": 65536 00:14:16.119 } 00:14:16.119 ] 00:14:16.119 } 00:14:16.119 } 00:14:16.119 }' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:16.119 BaseBdev2 00:14:16.119 BaseBdev3' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.119 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 [2024-11-26 13:27:04.783700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:16.379 
13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.379 "name": "Existed_Raid", 00:14:16.379 "uuid": "28f94ac5-f723-486c-ae6e-e2e72f79f789", 00:14:16.379 "strip_size_kb": 64, 00:14:16.379 "state": 
"online", 00:14:16.379 "raid_level": "raid5f", 00:14:16.379 "superblock": false, 00:14:16.379 "num_base_bdevs": 3, 00:14:16.379 "num_base_bdevs_discovered": 2, 00:14:16.379 "num_base_bdevs_operational": 2, 00:14:16.379 "base_bdevs_list": [ 00:14:16.379 { 00:14:16.379 "name": null, 00:14:16.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.379 "is_configured": false, 00:14:16.379 "data_offset": 0, 00:14:16.379 "data_size": 65536 00:14:16.379 }, 00:14:16.379 { 00:14:16.379 "name": "BaseBdev2", 00:14:16.379 "uuid": "1045677c-c55f-4046-8e41-e337d07f04ca", 00:14:16.379 "is_configured": true, 00:14:16.379 "data_offset": 0, 00:14:16.379 "data_size": 65536 00:14:16.379 }, 00:14:16.379 { 00:14:16.379 "name": "BaseBdev3", 00:14:16.379 "uuid": "efc0fd41-806d-48c5-8e0c-385ac59b0e7a", 00:14:16.379 "is_configured": true, 00:14:16.379 "data_offset": 0, 00:14:16.379 "data_size": 65536 00:14:16.379 } 00:14:16.379 ] 00:14:16.379 }' 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.379 13:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.947 13:27:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.948 [2024-11-26 13:27:05.427718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.948 [2024-11-26 13:27:05.427837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.948 [2024-11-26 13:27:05.495173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.948 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.209 [2024-11-26 13:27:05.555229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.209 [2024-11-26 13:27:05.555290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.209 BaseBdev2 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.209 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:17.209 [ 00:14:17.209 { 00:14:17.209 "name": "BaseBdev2", 00:14:17.209 "aliases": [ 00:14:17.209 "cfdb238c-c7ee-4b07-b428-422d21c624fa" 00:14:17.209 ], 00:14:17.209 "product_name": "Malloc disk", 00:14:17.209 "block_size": 512, 00:14:17.209 "num_blocks": 65536, 00:14:17.209 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:17.209 "assigned_rate_limits": { 00:14:17.209 "rw_ios_per_sec": 0, 00:14:17.209 "rw_mbytes_per_sec": 0, 00:14:17.209 "r_mbytes_per_sec": 0, 00:14:17.209 "w_mbytes_per_sec": 0 00:14:17.209 }, 00:14:17.209 "claimed": false, 00:14:17.209 "zoned": false, 00:14:17.209 "supported_io_types": { 00:14:17.209 "read": true, 00:14:17.209 "write": true, 00:14:17.209 "unmap": true, 00:14:17.209 "flush": true, 00:14:17.209 "reset": true, 00:14:17.209 "nvme_admin": false, 00:14:17.209 "nvme_io": false, 00:14:17.209 "nvme_io_md": false, 00:14:17.209 "write_zeroes": true, 00:14:17.209 "zcopy": true, 00:14:17.210 "get_zone_info": false, 00:14:17.210 "zone_management": false, 00:14:17.210 "zone_append": false, 00:14:17.210 "compare": false, 00:14:17.210 "compare_and_write": false, 00:14:17.210 "abort": true, 00:14:17.210 "seek_hole": false, 00:14:17.210 "seek_data": false, 00:14:17.210 "copy": true, 00:14:17.210 "nvme_iov_md": false 00:14:17.210 }, 00:14:17.210 "memory_domains": [ 00:14:17.210 { 00:14:17.210 "dma_device_id": "system", 00:14:17.210 "dma_device_type": 1 00:14:17.210 }, 00:14:17.210 { 00:14:17.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.210 "dma_device_type": 2 00:14:17.210 } 00:14:17.210 ], 00:14:17.210 "driver_specific": {} 00:14:17.210 } 00:14:17.210 ] 00:14:17.210 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.210 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:17.210 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:17.210 13:27:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:17.210 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:17.210 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.210 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.483 BaseBdev3 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.483 13:27:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:17.483 [ 00:14:17.483 { 00:14:17.483 "name": "BaseBdev3", 00:14:17.483 "aliases": [ 00:14:17.483 "f7061d1f-a6f2-4892-aeea-1cbfff016aca" 00:14:17.483 ], 00:14:17.483 "product_name": "Malloc disk", 00:14:17.483 "block_size": 512, 00:14:17.483 "num_blocks": 65536, 00:14:17.483 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:17.483 "assigned_rate_limits": { 00:14:17.483 "rw_ios_per_sec": 0, 00:14:17.483 "rw_mbytes_per_sec": 0, 00:14:17.483 "r_mbytes_per_sec": 0, 00:14:17.483 "w_mbytes_per_sec": 0 00:14:17.483 }, 00:14:17.483 "claimed": false, 00:14:17.483 "zoned": false, 00:14:17.483 "supported_io_types": { 00:14:17.483 "read": true, 00:14:17.483 "write": true, 00:14:17.483 "unmap": true, 00:14:17.483 "flush": true, 00:14:17.483 "reset": true, 00:14:17.483 "nvme_admin": false, 00:14:17.483 "nvme_io": false, 00:14:17.483 "nvme_io_md": false, 00:14:17.483 "write_zeroes": true, 00:14:17.483 "zcopy": true, 00:14:17.484 "get_zone_info": false, 00:14:17.484 "zone_management": false, 00:14:17.484 "zone_append": false, 00:14:17.484 "compare": false, 00:14:17.484 "compare_and_write": false, 00:14:17.484 "abort": true, 00:14:17.484 "seek_hole": false, 00:14:17.484 "seek_data": false, 00:14:17.484 "copy": true, 00:14:17.484 "nvme_iov_md": false 00:14:17.484 }, 00:14:17.484 "memory_domains": [ 00:14:17.484 { 00:14:17.484 "dma_device_id": "system", 00:14:17.484 "dma_device_type": 1 00:14:17.484 }, 00:14:17.484 { 00:14:17.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.484 "dma_device_type": 2 00:14:17.484 } 00:14:17.484 ], 00:14:17.484 "driver_specific": {} 00:14:17.484 } 00:14:17.484 ] 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:17.484 13:27:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.484 [2024-11-26 13:27:05.823161] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.484 [2024-11-26 13:27:05.823501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.484 [2024-11-26 13:27:05.823544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.484 [2024-11-26 13:27:05.825661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.484 13:27:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.484 "name": "Existed_Raid", 00:14:17.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.484 "strip_size_kb": 64, 00:14:17.484 "state": "configuring", 00:14:17.484 "raid_level": "raid5f", 00:14:17.484 "superblock": false, 00:14:17.484 "num_base_bdevs": 3, 00:14:17.484 "num_base_bdevs_discovered": 2, 00:14:17.484 "num_base_bdevs_operational": 3, 00:14:17.484 "base_bdevs_list": [ 00:14:17.484 { 00:14:17.484 "name": "BaseBdev1", 00:14:17.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.484 "is_configured": false, 00:14:17.484 "data_offset": 0, 00:14:17.484 "data_size": 0 00:14:17.484 }, 00:14:17.484 { 00:14:17.484 "name": "BaseBdev2", 00:14:17.484 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:17.484 "is_configured": true, 00:14:17.484 "data_offset": 0, 00:14:17.484 "data_size": 65536 00:14:17.484 }, 00:14:17.484 { 00:14:17.484 "name": "BaseBdev3", 00:14:17.484 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:17.484 "is_configured": true, 
00:14:17.484 "data_offset": 0, 00:14:17.484 "data_size": 65536 00:14:17.484 } 00:14:17.484 ] 00:14:17.484 }' 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.484 13:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.108 [2024-11-26 13:27:06.359222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.108 13:27:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.108 "name": "Existed_Raid", 00:14:18.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.108 "strip_size_kb": 64, 00:14:18.108 "state": "configuring", 00:14:18.108 "raid_level": "raid5f", 00:14:18.108 "superblock": false, 00:14:18.108 "num_base_bdevs": 3, 00:14:18.108 "num_base_bdevs_discovered": 1, 00:14:18.108 "num_base_bdevs_operational": 3, 00:14:18.108 "base_bdevs_list": [ 00:14:18.108 { 00:14:18.108 "name": "BaseBdev1", 00:14:18.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.108 "is_configured": false, 00:14:18.108 "data_offset": 0, 00:14:18.108 "data_size": 0 00:14:18.108 }, 00:14:18.108 { 00:14:18.108 "name": null, 00:14:18.108 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:18.108 "is_configured": false, 00:14:18.108 "data_offset": 0, 00:14:18.108 "data_size": 65536 00:14:18.108 }, 00:14:18.108 { 00:14:18.108 "name": "BaseBdev3", 00:14:18.108 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:18.108 "is_configured": true, 00:14:18.108 "data_offset": 0, 00:14:18.108 "data_size": 65536 00:14:18.108 } 00:14:18.108 ] 00:14:18.108 }' 00:14:18.108 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.108 13:27:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.367 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.367 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.367 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:18.367 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.367 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.627 [2024-11-26 13:27:06.974549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.627 BaseBdev1 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.627 13:27:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.627 13:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.627 [ 00:14:18.627 { 00:14:18.627 "name": "BaseBdev1", 00:14:18.627 "aliases": [ 00:14:18.627 "abc5473b-3c65-48a9-ac32-2e840734f86d" 00:14:18.627 ], 00:14:18.627 "product_name": "Malloc disk", 00:14:18.627 "block_size": 512, 00:14:18.627 "num_blocks": 65536, 00:14:18.627 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:18.627 "assigned_rate_limits": { 00:14:18.627 "rw_ios_per_sec": 0, 00:14:18.627 "rw_mbytes_per_sec": 0, 00:14:18.627 "r_mbytes_per_sec": 0, 00:14:18.627 "w_mbytes_per_sec": 0 00:14:18.627 }, 00:14:18.627 "claimed": true, 00:14:18.627 "claim_type": "exclusive_write", 00:14:18.627 "zoned": false, 00:14:18.627 "supported_io_types": { 00:14:18.627 "read": true, 00:14:18.627 "write": true, 00:14:18.627 "unmap": true, 00:14:18.627 "flush": true, 00:14:18.627 "reset": true, 00:14:18.627 "nvme_admin": false, 00:14:18.627 "nvme_io": false, 00:14:18.627 "nvme_io_md": false, 00:14:18.627 "write_zeroes": true, 00:14:18.627 "zcopy": true, 00:14:18.627 "get_zone_info": false, 00:14:18.627 "zone_management": false, 00:14:18.627 "zone_append": false, 00:14:18.627 
"compare": false, 00:14:18.627 "compare_and_write": false, 00:14:18.627 "abort": true, 00:14:18.627 "seek_hole": false, 00:14:18.627 "seek_data": false, 00:14:18.627 "copy": true, 00:14:18.627 "nvme_iov_md": false 00:14:18.627 }, 00:14:18.627 "memory_domains": [ 00:14:18.627 { 00:14:18.627 "dma_device_id": "system", 00:14:18.627 "dma_device_type": 1 00:14:18.627 }, 00:14:18.627 { 00:14:18.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.627 "dma_device_type": 2 00:14:18.627 } 00:14:18.627 ], 00:14:18.627 "driver_specific": {} 00:14:18.627 } 00:14:18.627 ] 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.627 13:27:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.627 "name": "Existed_Raid", 00:14:18.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.627 "strip_size_kb": 64, 00:14:18.627 "state": "configuring", 00:14:18.627 "raid_level": "raid5f", 00:14:18.627 "superblock": false, 00:14:18.627 "num_base_bdevs": 3, 00:14:18.627 "num_base_bdevs_discovered": 2, 00:14:18.627 "num_base_bdevs_operational": 3, 00:14:18.627 "base_bdevs_list": [ 00:14:18.627 { 00:14:18.627 "name": "BaseBdev1", 00:14:18.627 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:18.627 "is_configured": true, 00:14:18.627 "data_offset": 0, 00:14:18.627 "data_size": 65536 00:14:18.627 }, 00:14:18.627 { 00:14:18.627 "name": null, 00:14:18.627 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:18.627 "is_configured": false, 00:14:18.627 "data_offset": 0, 00:14:18.627 "data_size": 65536 00:14:18.627 }, 00:14:18.627 { 00:14:18.627 "name": "BaseBdev3", 00:14:18.627 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:18.627 "is_configured": true, 00:14:18.627 "data_offset": 0, 00:14:18.627 "data_size": 65536 00:14:18.627 } 00:14:18.627 ] 00:14:18.627 }' 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.627 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.195 13:27:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.195 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.195 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.195 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:19.195 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.195 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:19.195 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.196 [2024-11-26 13:27:07.590744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.196 13:27:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.196 "name": "Existed_Raid", 00:14:19.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.196 "strip_size_kb": 64, 00:14:19.196 "state": "configuring", 00:14:19.196 "raid_level": "raid5f", 00:14:19.196 "superblock": false, 00:14:19.196 "num_base_bdevs": 3, 00:14:19.196 "num_base_bdevs_discovered": 1, 00:14:19.196 "num_base_bdevs_operational": 3, 00:14:19.196 "base_bdevs_list": [ 00:14:19.196 { 00:14:19.196 "name": "BaseBdev1", 00:14:19.196 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:19.196 "is_configured": true, 00:14:19.196 "data_offset": 0, 00:14:19.196 "data_size": 65536 00:14:19.196 }, 00:14:19.196 { 00:14:19.196 "name": null, 00:14:19.196 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:19.196 "is_configured": false, 00:14:19.196 "data_offset": 0, 00:14:19.196 "data_size": 65536 00:14:19.196 }, 00:14:19.196 { 00:14:19.196 "name": null, 
00:14:19.196 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:19.196 "is_configured": false, 00:14:19.196 "data_offset": 0, 00:14:19.196 "data_size": 65536 00:14:19.196 } 00:14:19.196 ] 00:14:19.196 }' 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.196 13:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 [2024-11-26 13:27:08.162902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.765 13:27:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.765 "name": "Existed_Raid", 00:14:19.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.765 "strip_size_kb": 64, 00:14:19.765 "state": "configuring", 00:14:19.765 "raid_level": "raid5f", 00:14:19.765 "superblock": false, 00:14:19.765 "num_base_bdevs": 3, 00:14:19.765 "num_base_bdevs_discovered": 2, 00:14:19.765 "num_base_bdevs_operational": 3, 00:14:19.765 "base_bdevs_list": [ 00:14:19.765 { 
00:14:19.765 "name": "BaseBdev1", 00:14:19.765 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:19.765 "is_configured": true, 00:14:19.765 "data_offset": 0, 00:14:19.765 "data_size": 65536 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "name": null, 00:14:19.765 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:19.765 "is_configured": false, 00:14:19.765 "data_offset": 0, 00:14:19.765 "data_size": 65536 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "name": "BaseBdev3", 00:14:19.765 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:19.765 "is_configured": true, 00:14:19.765 "data_offset": 0, 00:14:19.765 "data_size": 65536 00:14:19.765 } 00:14:19.765 ] 00:14:19.765 }' 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.765 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.334 [2024-11-26 13:27:08.759099] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.334 "name": "Existed_Raid", 00:14:20.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.334 "strip_size_kb": 64, 00:14:20.334 "state": "configuring", 00:14:20.334 "raid_level": "raid5f", 00:14:20.334 "superblock": false, 00:14:20.334 "num_base_bdevs": 3, 00:14:20.334 "num_base_bdevs_discovered": 1, 00:14:20.334 "num_base_bdevs_operational": 3, 00:14:20.334 "base_bdevs_list": [ 00:14:20.334 { 00:14:20.334 "name": null, 00:14:20.334 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:20.334 "is_configured": false, 00:14:20.334 "data_offset": 0, 00:14:20.334 "data_size": 65536 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "name": null, 00:14:20.334 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:20.334 "is_configured": false, 00:14:20.334 "data_offset": 0, 00:14:20.334 "data_size": 65536 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "name": "BaseBdev3", 00:14:20.334 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:20.334 "is_configured": true, 00:14:20.334 "data_offset": 0, 00:14:20.334 "data_size": 65536 00:14:20.334 } 00:14:20.334 ] 00:14:20.334 }' 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.334 13:27:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.902 [2024-11-26 13:27:09.385126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.902 13:27:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.902 "name": "Existed_Raid", 00:14:20.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.902 "strip_size_kb": 64, 00:14:20.902 "state": "configuring", 00:14:20.902 "raid_level": "raid5f", 00:14:20.902 "superblock": false, 00:14:20.902 "num_base_bdevs": 3, 00:14:20.902 "num_base_bdevs_discovered": 2, 00:14:20.902 "num_base_bdevs_operational": 3, 00:14:20.902 "base_bdevs_list": [ 00:14:20.902 { 00:14:20.902 "name": null, 00:14:20.902 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:20.902 "is_configured": false, 00:14:20.902 "data_offset": 0, 00:14:20.902 "data_size": 65536 00:14:20.902 }, 00:14:20.902 { 00:14:20.902 "name": "BaseBdev2", 00:14:20.902 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:20.902 "is_configured": true, 00:14:20.902 "data_offset": 0, 00:14:20.902 "data_size": 65536 00:14:20.902 }, 00:14:20.902 { 00:14:20.902 "name": "BaseBdev3", 00:14:20.902 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:20.902 "is_configured": true, 00:14:20.902 "data_offset": 0, 00:14:20.902 "data_size": 65536 00:14:20.902 } 00:14:20.902 ] 00:14:20.902 }' 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.902 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.470 13:27:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abc5473b-3c65-48a9-ac32-2e840734f86d 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.470 13:27:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.470 [2024-11-26 13:27:10.032169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:21.470 [2024-11-26 13:27:10.032216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:21.470 [2024-11-26 13:27:10.032231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:21.470 [2024-11-26 13:27:10.032526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:21.730 [2024-11-26 13:27:10.036790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:21.730 [2024-11-26 13:27:10.036813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:21.730 [2024-11-26 13:27:10.037091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.730 NewBaseBdev 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.730 13:27:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.730 [ 00:14:21.730 { 00:14:21.730 "name": "NewBaseBdev", 00:14:21.730 "aliases": [ 00:14:21.730 "abc5473b-3c65-48a9-ac32-2e840734f86d" 00:14:21.730 ], 00:14:21.730 "product_name": "Malloc disk", 00:14:21.730 "block_size": 512, 00:14:21.730 "num_blocks": 65536, 00:14:21.730 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:21.730 "assigned_rate_limits": { 00:14:21.730 "rw_ios_per_sec": 0, 00:14:21.730 "rw_mbytes_per_sec": 0, 00:14:21.730 "r_mbytes_per_sec": 0, 00:14:21.730 "w_mbytes_per_sec": 0 00:14:21.730 }, 00:14:21.730 "claimed": true, 00:14:21.730 "claim_type": "exclusive_write", 00:14:21.730 "zoned": false, 00:14:21.730 "supported_io_types": { 00:14:21.730 "read": true, 00:14:21.730 "write": true, 00:14:21.730 "unmap": true, 00:14:21.730 "flush": true, 00:14:21.730 "reset": true, 00:14:21.730 "nvme_admin": false, 00:14:21.730 "nvme_io": false, 00:14:21.730 "nvme_io_md": false, 00:14:21.730 "write_zeroes": true, 00:14:21.730 "zcopy": true, 00:14:21.730 "get_zone_info": false, 00:14:21.730 "zone_management": false, 00:14:21.730 "zone_append": false, 00:14:21.730 "compare": false, 00:14:21.730 "compare_and_write": false, 00:14:21.730 "abort": true, 00:14:21.730 "seek_hole": false, 00:14:21.730 "seek_data": false, 00:14:21.730 "copy": true, 00:14:21.730 "nvme_iov_md": false 00:14:21.730 }, 00:14:21.730 "memory_domains": [ 00:14:21.730 { 00:14:21.730 "dma_device_id": "system", 00:14:21.730 "dma_device_type": 1 00:14:21.730 }, 00:14:21.730 { 00:14:21.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.730 "dma_device_type": 2 00:14:21.730 } 00:14:21.730 ], 00:14:21.730 "driver_specific": {} 00:14:21.730 } 00:14:21.730 ] 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:21.730 13:27:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.730 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.730 "name": "Existed_Raid", 00:14:21.730 "uuid": "56512e55-6993-480d-a27f-20a9015a7445", 00:14:21.730 "strip_size_kb": 64, 00:14:21.730 "state": "online", 
00:14:21.730 "raid_level": "raid5f", 00:14:21.730 "superblock": false, 00:14:21.730 "num_base_bdevs": 3, 00:14:21.730 "num_base_bdevs_discovered": 3, 00:14:21.730 "num_base_bdevs_operational": 3, 00:14:21.730 "base_bdevs_list": [ 00:14:21.730 { 00:14:21.730 "name": "NewBaseBdev", 00:14:21.730 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:21.730 "is_configured": true, 00:14:21.730 "data_offset": 0, 00:14:21.730 "data_size": 65536 00:14:21.730 }, 00:14:21.730 { 00:14:21.730 "name": "BaseBdev2", 00:14:21.730 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:21.730 "is_configured": true, 00:14:21.730 "data_offset": 0, 00:14:21.730 "data_size": 65536 00:14:21.730 }, 00:14:21.730 { 00:14:21.731 "name": "BaseBdev3", 00:14:21.731 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:21.731 "is_configured": true, 00:14:21.731 "data_offset": 0, 00:14:21.731 "data_size": 65536 00:14:21.731 } 00:14:21.731 ] 00:14:21.731 }' 00:14:21.731 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.731 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:22.300 13:27:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.300 [2024-11-26 13:27:10.606499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.300 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:22.300 "name": "Existed_Raid", 00:14:22.300 "aliases": [ 00:14:22.300 "56512e55-6993-480d-a27f-20a9015a7445" 00:14:22.300 ], 00:14:22.300 "product_name": "Raid Volume", 00:14:22.300 "block_size": 512, 00:14:22.300 "num_blocks": 131072, 00:14:22.300 "uuid": "56512e55-6993-480d-a27f-20a9015a7445", 00:14:22.300 "assigned_rate_limits": { 00:14:22.300 "rw_ios_per_sec": 0, 00:14:22.301 "rw_mbytes_per_sec": 0, 00:14:22.301 "r_mbytes_per_sec": 0, 00:14:22.301 "w_mbytes_per_sec": 0 00:14:22.301 }, 00:14:22.301 "claimed": false, 00:14:22.301 "zoned": false, 00:14:22.301 "supported_io_types": { 00:14:22.301 "read": true, 00:14:22.301 "write": true, 00:14:22.301 "unmap": false, 00:14:22.301 "flush": false, 00:14:22.301 "reset": true, 00:14:22.301 "nvme_admin": false, 00:14:22.301 "nvme_io": false, 00:14:22.301 "nvme_io_md": false, 00:14:22.301 "write_zeroes": true, 00:14:22.301 "zcopy": false, 00:14:22.301 "get_zone_info": false, 00:14:22.301 "zone_management": false, 00:14:22.301 "zone_append": false, 00:14:22.301 "compare": false, 00:14:22.301 "compare_and_write": false, 00:14:22.301 "abort": false, 00:14:22.301 "seek_hole": false, 00:14:22.301 "seek_data": false, 00:14:22.301 "copy": false, 00:14:22.301 "nvme_iov_md": false 00:14:22.301 }, 00:14:22.301 "driver_specific": { 00:14:22.301 "raid": { 00:14:22.301 "uuid": 
"56512e55-6993-480d-a27f-20a9015a7445", 00:14:22.301 "strip_size_kb": 64, 00:14:22.301 "state": "online", 00:14:22.301 "raid_level": "raid5f", 00:14:22.301 "superblock": false, 00:14:22.301 "num_base_bdevs": 3, 00:14:22.301 "num_base_bdevs_discovered": 3, 00:14:22.301 "num_base_bdevs_operational": 3, 00:14:22.301 "base_bdevs_list": [ 00:14:22.301 { 00:14:22.301 "name": "NewBaseBdev", 00:14:22.301 "uuid": "abc5473b-3c65-48a9-ac32-2e840734f86d", 00:14:22.301 "is_configured": true, 00:14:22.301 "data_offset": 0, 00:14:22.301 "data_size": 65536 00:14:22.301 }, 00:14:22.301 { 00:14:22.301 "name": "BaseBdev2", 00:14:22.301 "uuid": "cfdb238c-c7ee-4b07-b428-422d21c624fa", 00:14:22.301 "is_configured": true, 00:14:22.301 "data_offset": 0, 00:14:22.301 "data_size": 65536 00:14:22.301 }, 00:14:22.301 { 00:14:22.301 "name": "BaseBdev3", 00:14:22.301 "uuid": "f7061d1f-a6f2-4892-aeea-1cbfff016aca", 00:14:22.301 "is_configured": true, 00:14:22.301 "data_offset": 0, 00:14:22.301 "data_size": 65536 00:14:22.301 } 00:14:22.301 ] 00:14:22.301 } 00:14:22.301 } 00:14:22.301 }' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:22.301 BaseBdev2 00:14:22.301 BaseBdev3' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.301 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.560 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.561 [2024-11-26 13:27:10.922402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.561 [2024-11-26 13:27:10.922428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.561 [2024-11-26 13:27:10.922496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.561 [2024-11-26 13:27:10.922811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.561 [2024-11-26 13:27:10.922843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79503 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79503 ']' 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79503 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79503 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79503' 00:14:22.561 killing process with pid 79503 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79503 00:14:22.561 [2024-11-26 13:27:10.959335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.561 13:27:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79503 00:14:22.820 [2024-11-26 13:27:11.170725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:23.757 00:14:23.757 real 0m11.595s 00:14:23.757 user 0m19.529s 00:14:23.757 sys 0m1.596s 00:14:23.757 ************************************ 00:14:23.757 END TEST raid5f_state_function_test 00:14:23.757 ************************************ 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.757 13:27:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:23.757 13:27:12 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:23.757 13:27:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.757 13:27:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:23.757 ************************************ 00:14:23.757 START TEST raid5f_state_function_test_sb 00:14:23.757 ************************************ 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:23.757 13:27:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:23.757 Process raid pid: 80131 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80131 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80131' 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80131 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80131 ']' 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.757 13:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.757 [2024-11-26 13:27:12.270610] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:14:23.757 [2024-11-26 13:27:12.270995] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.017 [2024-11-26 13:27:12.449421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.017 [2024-11-26 13:27:12.560062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.276 [2024-11-26 13:27:12.751383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.276 [2024-11-26 13:27:12.751680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.844 [2024-11-26 13:27:13.237423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.844 [2024-11-26 13:27:13.237497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.844 [2024-11-26 13:27:13.237513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.844 [2024-11-26 13:27:13.237528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.844 [2024-11-26 13:27:13.237537] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:24.844 [2024-11-26 13:27:13.237566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.844 13:27:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.844 "name": "Existed_Raid", 00:14:24.844 "uuid": "e39532bf-27cf-4d48-8311-5f5c5f94598c", 00:14:24.844 "strip_size_kb": 64, 00:14:24.844 "state": "configuring", 00:14:24.844 "raid_level": "raid5f", 00:14:24.844 "superblock": true, 00:14:24.844 "num_base_bdevs": 3, 00:14:24.844 "num_base_bdevs_discovered": 0, 00:14:24.844 "num_base_bdevs_operational": 3, 00:14:24.844 "base_bdevs_list": [ 00:14:24.844 { 00:14:24.844 "name": "BaseBdev1", 00:14:24.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.844 "is_configured": false, 00:14:24.844 "data_offset": 0, 00:14:24.844 "data_size": 0 00:14:24.844 }, 00:14:24.844 { 00:14:24.844 "name": "BaseBdev2", 00:14:24.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.844 "is_configured": false, 00:14:24.844 "data_offset": 0, 00:14:24.844 "data_size": 0 00:14:24.844 }, 00:14:24.844 { 00:14:24.844 "name": "BaseBdev3", 00:14:24.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.844 "is_configured": false, 00:14:24.844 "data_offset": 0, 00:14:24.844 "data_size": 0 00:14:24.844 } 00:14:24.844 ] 00:14:24.844 }' 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.844 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.412 [2024-11-26 13:27:13.737487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.412 
[2024-11-26 13:27:13.737523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.412 [2024-11-26 13:27:13.745509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.412 [2024-11-26 13:27:13.745726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.412 [2024-11-26 13:27:13.745749] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.412 [2024-11-26 13:27:13.745765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.412 [2024-11-26 13:27:13.745774] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:25.412 [2024-11-26 13:27:13.745787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.412 [2024-11-26 13:27:13.784043] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.412 BaseBdev1 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.412 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.412 [ 00:14:25.412 { 00:14:25.412 "name": "BaseBdev1", 00:14:25.412 "aliases": [ 00:14:25.412 "f7693a33-940e-4f08-8cc7-be82c20f77a9" 00:14:25.412 ], 00:14:25.412 "product_name": "Malloc disk", 00:14:25.412 "block_size": 512, 00:14:25.412 
"num_blocks": 65536, 00:14:25.412 "uuid": "f7693a33-940e-4f08-8cc7-be82c20f77a9", 00:14:25.412 "assigned_rate_limits": { 00:14:25.412 "rw_ios_per_sec": 0, 00:14:25.413 "rw_mbytes_per_sec": 0, 00:14:25.413 "r_mbytes_per_sec": 0, 00:14:25.413 "w_mbytes_per_sec": 0 00:14:25.413 }, 00:14:25.413 "claimed": true, 00:14:25.413 "claim_type": "exclusive_write", 00:14:25.413 "zoned": false, 00:14:25.413 "supported_io_types": { 00:14:25.413 "read": true, 00:14:25.413 "write": true, 00:14:25.413 "unmap": true, 00:14:25.413 "flush": true, 00:14:25.413 "reset": true, 00:14:25.413 "nvme_admin": false, 00:14:25.413 "nvme_io": false, 00:14:25.413 "nvme_io_md": false, 00:14:25.413 "write_zeroes": true, 00:14:25.413 "zcopy": true, 00:14:25.413 "get_zone_info": false, 00:14:25.413 "zone_management": false, 00:14:25.413 "zone_append": false, 00:14:25.413 "compare": false, 00:14:25.413 "compare_and_write": false, 00:14:25.413 "abort": true, 00:14:25.413 "seek_hole": false, 00:14:25.413 "seek_data": false, 00:14:25.413 "copy": true, 00:14:25.413 "nvme_iov_md": false 00:14:25.413 }, 00:14:25.413 "memory_domains": [ 00:14:25.413 { 00:14:25.413 "dma_device_id": "system", 00:14:25.413 "dma_device_type": 1 00:14:25.413 }, 00:14:25.413 { 00:14:25.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.413 "dma_device_type": 2 00:14:25.413 } 00:14:25.413 ], 00:14:25.413 "driver_specific": {} 00:14:25.413 } 00:14:25.413 ] 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.413 "name": "Existed_Raid", 00:14:25.413 "uuid": "43a0139b-f4af-44d0-8488-894d305ef0b0", 00:14:25.413 "strip_size_kb": 64, 00:14:25.413 "state": "configuring", 00:14:25.413 "raid_level": "raid5f", 00:14:25.413 "superblock": true, 00:14:25.413 "num_base_bdevs": 3, 00:14:25.413 "num_base_bdevs_discovered": 1, 00:14:25.413 "num_base_bdevs_operational": 3, 00:14:25.413 "base_bdevs_list": [ 00:14:25.413 { 00:14:25.413 
"name": "BaseBdev1", 00:14:25.413 "uuid": "f7693a33-940e-4f08-8cc7-be82c20f77a9", 00:14:25.413 "is_configured": true, 00:14:25.413 "data_offset": 2048, 00:14:25.413 "data_size": 63488 00:14:25.413 }, 00:14:25.413 { 00:14:25.413 "name": "BaseBdev2", 00:14:25.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.413 "is_configured": false, 00:14:25.413 "data_offset": 0, 00:14:25.413 "data_size": 0 00:14:25.413 }, 00:14:25.413 { 00:14:25.413 "name": "BaseBdev3", 00:14:25.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.413 "is_configured": false, 00:14:25.413 "data_offset": 0, 00:14:25.413 "data_size": 0 00:14:25.413 } 00:14:25.413 ] 00:14:25.413 }' 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.413 13:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.982 [2024-11-26 13:27:14.336196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.982 [2024-11-26 13:27:14.336250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:25.982 [2024-11-26 13:27:14.348304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.982 [2024-11-26 13:27:14.350623] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.982 [2024-11-26 13:27:14.350687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.982 [2024-11-26 13:27:14.350702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:25.982 [2024-11-26 13:27:14.350715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.982 "name": "Existed_Raid", 00:14:25.982 "uuid": "ce037b69-f218-494f-a4f1-0a5260234692", 00:14:25.982 "strip_size_kb": 64, 00:14:25.982 "state": "configuring", 00:14:25.982 "raid_level": "raid5f", 00:14:25.982 "superblock": true, 00:14:25.982 "num_base_bdevs": 3, 00:14:25.982 "num_base_bdevs_discovered": 1, 00:14:25.982 "num_base_bdevs_operational": 3, 00:14:25.982 "base_bdevs_list": [ 00:14:25.982 { 00:14:25.982 "name": "BaseBdev1", 00:14:25.982 "uuid": "f7693a33-940e-4f08-8cc7-be82c20f77a9", 00:14:25.982 "is_configured": true, 00:14:25.982 "data_offset": 2048, 00:14:25.982 "data_size": 63488 00:14:25.982 }, 00:14:25.982 { 00:14:25.982 "name": "BaseBdev2", 00:14:25.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.982 "is_configured": false, 00:14:25.982 "data_offset": 0, 00:14:25.982 "data_size": 0 00:14:25.982 }, 00:14:25.982 { 00:14:25.982 "name": "BaseBdev3", 00:14:25.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.982 "is_configured": false, 00:14:25.982 "data_offset": 0, 00:14:25.982 "data_size": 
0 00:14:25.982 } 00:14:25.982 ] 00:14:25.982 }' 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.982 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.557 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:26.557 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.557 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.557 [2024-11-26 13:27:14.896368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.557 BaseBdev2 00:14:26.557 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.558 [ 00:14:26.558 { 00:14:26.558 "name": "BaseBdev2", 00:14:26.558 "aliases": [ 00:14:26.558 "7baea23a-1c41-4c31-97d3-3dddf4589c65" 00:14:26.558 ], 00:14:26.558 "product_name": "Malloc disk", 00:14:26.558 "block_size": 512, 00:14:26.558 "num_blocks": 65536, 00:14:26.558 "uuid": "7baea23a-1c41-4c31-97d3-3dddf4589c65", 00:14:26.558 "assigned_rate_limits": { 00:14:26.558 "rw_ios_per_sec": 0, 00:14:26.558 "rw_mbytes_per_sec": 0, 00:14:26.558 "r_mbytes_per_sec": 0, 00:14:26.558 "w_mbytes_per_sec": 0 00:14:26.558 }, 00:14:26.558 "claimed": true, 00:14:26.558 "claim_type": "exclusive_write", 00:14:26.558 "zoned": false, 00:14:26.558 "supported_io_types": { 00:14:26.558 "read": true, 00:14:26.558 "write": true, 00:14:26.558 "unmap": true, 00:14:26.558 "flush": true, 00:14:26.558 "reset": true, 00:14:26.558 "nvme_admin": false, 00:14:26.558 "nvme_io": false, 00:14:26.558 "nvme_io_md": false, 00:14:26.558 "write_zeroes": true, 00:14:26.558 "zcopy": true, 00:14:26.558 "get_zone_info": false, 00:14:26.558 "zone_management": false, 00:14:26.558 "zone_append": false, 00:14:26.558 "compare": false, 00:14:26.558 "compare_and_write": false, 00:14:26.558 "abort": true, 00:14:26.558 "seek_hole": false, 00:14:26.558 "seek_data": false, 00:14:26.558 "copy": true, 00:14:26.558 "nvme_iov_md": false 00:14:26.558 }, 00:14:26.558 "memory_domains": [ 00:14:26.558 { 00:14:26.558 "dma_device_id": "system", 00:14:26.558 "dma_device_type": 1 00:14:26.558 }, 00:14:26.558 { 00:14:26.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.558 "dma_device_type": 2 00:14:26.558 } 
00:14:26.558 ], 00:14:26.558 "driver_specific": {} 00:14:26.558 } 00:14:26.558 ] 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.558 "name": "Existed_Raid", 00:14:26.558 "uuid": "ce037b69-f218-494f-a4f1-0a5260234692", 00:14:26.558 "strip_size_kb": 64, 00:14:26.558 "state": "configuring", 00:14:26.558 "raid_level": "raid5f", 00:14:26.558 "superblock": true, 00:14:26.558 "num_base_bdevs": 3, 00:14:26.558 "num_base_bdevs_discovered": 2, 00:14:26.558 "num_base_bdevs_operational": 3, 00:14:26.558 "base_bdevs_list": [ 00:14:26.558 { 00:14:26.558 "name": "BaseBdev1", 00:14:26.558 "uuid": "f7693a33-940e-4f08-8cc7-be82c20f77a9", 00:14:26.558 "is_configured": true, 00:14:26.558 "data_offset": 2048, 00:14:26.558 "data_size": 63488 00:14:26.558 }, 00:14:26.558 { 00:14:26.558 "name": "BaseBdev2", 00:14:26.558 "uuid": "7baea23a-1c41-4c31-97d3-3dddf4589c65", 00:14:26.558 "is_configured": true, 00:14:26.558 "data_offset": 2048, 00:14:26.558 "data_size": 63488 00:14:26.558 }, 00:14:26.558 { 00:14:26.558 "name": "BaseBdev3", 00:14:26.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.558 "is_configured": false, 00:14:26.558 "data_offset": 0, 00:14:26.558 "data_size": 0 00:14:26.558 } 00:14:26.558 ] 00:14:26.558 }' 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.558 13:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 [2024-11-26 13:27:15.519559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.126 [2024-11-26 13:27:15.520011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:27.126 [2024-11-26 13:27:15.520046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:27.126 BaseBdev3 00:14:27.126 [2024-11-26 13:27:15.520411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 [2024-11-26 13:27:15.525043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:27.126 [2024-11-26 13:27:15.525067] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:27.126 [2024-11-26 13:27:15.525371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.126 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 [ 00:14:27.126 { 00:14:27.126 "name": "BaseBdev3", 00:14:27.126 "aliases": [ 00:14:27.126 "355e6f8e-1c26-495f-870c-9b3765f95057" 00:14:27.126 ], 00:14:27.126 "product_name": "Malloc disk", 00:14:27.126 "block_size": 512, 00:14:27.126 "num_blocks": 65536, 00:14:27.126 "uuid": "355e6f8e-1c26-495f-870c-9b3765f95057", 00:14:27.126 "assigned_rate_limits": { 00:14:27.126 "rw_ios_per_sec": 0, 00:14:27.126 "rw_mbytes_per_sec": 0, 00:14:27.126 "r_mbytes_per_sec": 0, 00:14:27.126 "w_mbytes_per_sec": 0 00:14:27.126 }, 00:14:27.126 "claimed": true, 00:14:27.126 "claim_type": "exclusive_write", 00:14:27.126 "zoned": false, 00:14:27.126 "supported_io_types": { 00:14:27.126 "read": true, 00:14:27.126 "write": true, 00:14:27.126 "unmap": true, 00:14:27.126 "flush": true, 00:14:27.126 "reset": true, 00:14:27.126 "nvme_admin": false, 00:14:27.126 "nvme_io": false, 00:14:27.126 "nvme_io_md": false, 00:14:27.127 "write_zeroes": true, 00:14:27.127 "zcopy": true, 00:14:27.127 "get_zone_info": false, 00:14:27.127 "zone_management": false, 00:14:27.127 "zone_append": false, 00:14:27.127 "compare": false, 00:14:27.127 "compare_and_write": false, 00:14:27.127 "abort": true, 00:14:27.127 "seek_hole": false, 00:14:27.127 "seek_data": false, 00:14:27.127 "copy": true, 00:14:27.127 
"nvme_iov_md": false 00:14:27.127 }, 00:14:27.127 "memory_domains": [ 00:14:27.127 { 00:14:27.127 "dma_device_id": "system", 00:14:27.127 "dma_device_type": 1 00:14:27.127 }, 00:14:27.127 { 00:14:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.127 "dma_device_type": 2 00:14:27.127 } 00:14:27.127 ], 00:14:27.127 "driver_specific": {} 00:14:27.127 } 00:14:27.127 ] 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.127 "name": "Existed_Raid", 00:14:27.127 "uuid": "ce037b69-f218-494f-a4f1-0a5260234692", 00:14:27.127 "strip_size_kb": 64, 00:14:27.127 "state": "online", 00:14:27.127 "raid_level": "raid5f", 00:14:27.127 "superblock": true, 00:14:27.127 "num_base_bdevs": 3, 00:14:27.127 "num_base_bdevs_discovered": 3, 00:14:27.127 "num_base_bdevs_operational": 3, 00:14:27.127 "base_bdevs_list": [ 00:14:27.127 { 00:14:27.127 "name": "BaseBdev1", 00:14:27.127 "uuid": "f7693a33-940e-4f08-8cc7-be82c20f77a9", 00:14:27.127 "is_configured": true, 00:14:27.127 "data_offset": 2048, 00:14:27.127 "data_size": 63488 00:14:27.127 }, 00:14:27.127 { 00:14:27.127 "name": "BaseBdev2", 00:14:27.127 "uuid": "7baea23a-1c41-4c31-97d3-3dddf4589c65", 00:14:27.127 "is_configured": true, 00:14:27.127 "data_offset": 2048, 00:14:27.127 "data_size": 63488 00:14:27.127 }, 00:14:27.127 { 00:14:27.127 "name": "BaseBdev3", 00:14:27.127 "uuid": "355e6f8e-1c26-495f-870c-9b3765f95057", 00:14:27.127 "is_configured": true, 00:14:27.127 "data_offset": 2048, 00:14:27.127 "data_size": 63488 00:14:27.127 } 00:14:27.127 ] 00:14:27.127 }' 00:14:27.127 13:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.127 13:27:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.695 [2024-11-26 13:27:16.090384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.695 "name": "Existed_Raid", 00:14:27.695 "aliases": [ 00:14:27.695 "ce037b69-f218-494f-a4f1-0a5260234692" 00:14:27.695 ], 00:14:27.695 "product_name": "Raid Volume", 00:14:27.695 "block_size": 512, 00:14:27.695 "num_blocks": 126976, 00:14:27.695 "uuid": "ce037b69-f218-494f-a4f1-0a5260234692", 00:14:27.695 "assigned_rate_limits": { 00:14:27.695 "rw_ios_per_sec": 0, 00:14:27.695 
"rw_mbytes_per_sec": 0, 00:14:27.695 "r_mbytes_per_sec": 0, 00:14:27.695 "w_mbytes_per_sec": 0 00:14:27.695 }, 00:14:27.695 "claimed": false, 00:14:27.695 "zoned": false, 00:14:27.695 "supported_io_types": { 00:14:27.695 "read": true, 00:14:27.695 "write": true, 00:14:27.695 "unmap": false, 00:14:27.695 "flush": false, 00:14:27.695 "reset": true, 00:14:27.695 "nvme_admin": false, 00:14:27.695 "nvme_io": false, 00:14:27.695 "nvme_io_md": false, 00:14:27.695 "write_zeroes": true, 00:14:27.695 "zcopy": false, 00:14:27.695 "get_zone_info": false, 00:14:27.695 "zone_management": false, 00:14:27.695 "zone_append": false, 00:14:27.695 "compare": false, 00:14:27.695 "compare_and_write": false, 00:14:27.695 "abort": false, 00:14:27.695 "seek_hole": false, 00:14:27.695 "seek_data": false, 00:14:27.695 "copy": false, 00:14:27.695 "nvme_iov_md": false 00:14:27.695 }, 00:14:27.695 "driver_specific": { 00:14:27.695 "raid": { 00:14:27.695 "uuid": "ce037b69-f218-494f-a4f1-0a5260234692", 00:14:27.695 "strip_size_kb": 64, 00:14:27.695 "state": "online", 00:14:27.695 "raid_level": "raid5f", 00:14:27.695 "superblock": true, 00:14:27.695 "num_base_bdevs": 3, 00:14:27.695 "num_base_bdevs_discovered": 3, 00:14:27.695 "num_base_bdevs_operational": 3, 00:14:27.695 "base_bdevs_list": [ 00:14:27.695 { 00:14:27.695 "name": "BaseBdev1", 00:14:27.695 "uuid": "f7693a33-940e-4f08-8cc7-be82c20f77a9", 00:14:27.695 "is_configured": true, 00:14:27.695 "data_offset": 2048, 00:14:27.695 "data_size": 63488 00:14:27.695 }, 00:14:27.695 { 00:14:27.695 "name": "BaseBdev2", 00:14:27.695 "uuid": "7baea23a-1c41-4c31-97d3-3dddf4589c65", 00:14:27.695 "is_configured": true, 00:14:27.695 "data_offset": 2048, 00:14:27.695 "data_size": 63488 00:14:27.695 }, 00:14:27.695 { 00:14:27.695 "name": "BaseBdev3", 00:14:27.695 "uuid": "355e6f8e-1c26-495f-870c-9b3765f95057", 00:14:27.695 "is_configured": true, 00:14:27.695 "data_offset": 2048, 00:14:27.695 "data_size": 63488 00:14:27.695 } 00:14:27.695 ] 00:14:27.695 } 
00:14:27.695 } 00:14:27.695 }' 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:27.695 BaseBdev2 00:14:27.695 BaseBdev3' 00:14:27.695 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.696 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.696 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.696 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:27.696 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.696 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.696 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.955 [2024-11-26 
13:27:16.418306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:27.955 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.956 13:27:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.956 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.215 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.215 "name": "Existed_Raid", 00:14:28.215 "uuid": "ce037b69-f218-494f-a4f1-0a5260234692", 00:14:28.215 "strip_size_kb": 64, 00:14:28.215 "state": "online", 00:14:28.215 "raid_level": "raid5f", 00:14:28.215 "superblock": true, 00:14:28.215 "num_base_bdevs": 3, 00:14:28.215 "num_base_bdevs_discovered": 2, 00:14:28.215 "num_base_bdevs_operational": 2, 00:14:28.215 "base_bdevs_list": [ 00:14:28.215 { 00:14:28.215 "name": null, 00:14:28.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.215 "is_configured": false, 00:14:28.215 "data_offset": 0, 00:14:28.215 "data_size": 63488 00:14:28.215 }, 00:14:28.215 { 00:14:28.215 "name": "BaseBdev2", 00:14:28.215 "uuid": "7baea23a-1c41-4c31-97d3-3dddf4589c65", 00:14:28.215 "is_configured": true, 00:14:28.215 "data_offset": 2048, 00:14:28.215 "data_size": 63488 00:14:28.215 }, 00:14:28.215 { 00:14:28.215 "name": "BaseBdev3", 00:14:28.215 "uuid": "355e6f8e-1c26-495f-870c-9b3765f95057", 00:14:28.215 "is_configured": true, 00:14:28.215 "data_offset": 2048, 00:14:28.215 "data_size": 63488 00:14:28.215 } 00:14:28.215 ] 00:14:28.215 }' 00:14:28.215 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.215 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:28.474 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:28.474 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:28.474 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.474 13:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:28.474 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.474 13:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.474 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.733 [2024-11-26 13:27:17.054976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:28.733 [2024-11-26 13:27:17.055151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.733 [2024-11-26 13:27:17.119409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:28.733 13:27:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.733 [2024-11-26 13:27:17.183463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:28.733 [2024-11-26 13:27:17.183510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.733 
13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.733 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.992 BaseBdev2 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.992 13:27:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.992 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.992 [ 00:14:28.992 { 00:14:28.992 "name": "BaseBdev2", 00:14:28.992 "aliases": [ 00:14:28.992 "e91ba6d5-15fd-43a2-a83f-a11134302a44" 00:14:28.992 ], 00:14:28.992 "product_name": "Malloc disk", 00:14:28.992 "block_size": 512, 00:14:28.992 "num_blocks": 65536, 00:14:28.992 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:28.992 "assigned_rate_limits": { 00:14:28.992 "rw_ios_per_sec": 0, 00:14:28.992 "rw_mbytes_per_sec": 0, 00:14:28.992 "r_mbytes_per_sec": 0, 00:14:28.992 "w_mbytes_per_sec": 0 00:14:28.992 }, 00:14:28.992 "claimed": false, 00:14:28.992 "zoned": false, 00:14:28.992 "supported_io_types": { 00:14:28.992 "read": true, 00:14:28.992 "write": true, 00:14:28.992 "unmap": true, 00:14:28.992 "flush": true, 00:14:28.992 "reset": true, 00:14:28.993 "nvme_admin": false, 00:14:28.993 "nvme_io": false, 00:14:28.993 "nvme_io_md": false, 00:14:28.993 "write_zeroes": true, 00:14:28.993 "zcopy": true, 00:14:28.993 "get_zone_info": false, 
00:14:28.993 "zone_management": false, 00:14:28.993 "zone_append": false, 00:14:28.993 "compare": false, 00:14:28.993 "compare_and_write": false, 00:14:28.993 "abort": true, 00:14:28.993 "seek_hole": false, 00:14:28.993 "seek_data": false, 00:14:28.993 "copy": true, 00:14:28.993 "nvme_iov_md": false 00:14:28.993 }, 00:14:28.993 "memory_domains": [ 00:14:28.993 { 00:14:28.993 "dma_device_id": "system", 00:14:28.993 "dma_device_type": 1 00:14:28.993 }, 00:14:28.993 { 00:14:28.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.993 "dma_device_type": 2 00:14:28.993 } 00:14:28.993 ], 00:14:28.993 "driver_specific": {} 00:14:28.993 } 00:14:28.993 ] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.993 BaseBdev3 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.993 13:27:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.993 [ 00:14:28.993 { 00:14:28.993 "name": "BaseBdev3", 00:14:28.993 "aliases": [ 00:14:28.993 "f5028fe7-11c3-4979-a0c1-67fd5a27394c" 00:14:28.993 ], 00:14:28.993 "product_name": "Malloc disk", 00:14:28.993 "block_size": 512, 00:14:28.993 "num_blocks": 65536, 00:14:28.993 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:28.993 "assigned_rate_limits": { 00:14:28.993 "rw_ios_per_sec": 0, 00:14:28.993 "rw_mbytes_per_sec": 0, 00:14:28.993 "r_mbytes_per_sec": 0, 00:14:28.993 "w_mbytes_per_sec": 0 00:14:28.993 }, 00:14:28.993 "claimed": false, 00:14:28.993 "zoned": false, 00:14:28.993 "supported_io_types": { 00:14:28.993 "read": true, 00:14:28.993 "write": true, 00:14:28.993 "unmap": true, 00:14:28.993 "flush": true, 00:14:28.993 "reset": true, 00:14:28.993 "nvme_admin": false, 00:14:28.993 "nvme_io": false, 00:14:28.993 "nvme_io_md": 
false, 00:14:28.993 "write_zeroes": true, 00:14:28.993 "zcopy": true, 00:14:28.993 "get_zone_info": false, 00:14:28.993 "zone_management": false, 00:14:28.993 "zone_append": false, 00:14:28.993 "compare": false, 00:14:28.993 "compare_and_write": false, 00:14:28.993 "abort": true, 00:14:28.993 "seek_hole": false, 00:14:28.993 "seek_data": false, 00:14:28.993 "copy": true, 00:14:28.993 "nvme_iov_md": false 00:14:28.993 }, 00:14:28.993 "memory_domains": [ 00:14:28.993 { 00:14:28.993 "dma_device_id": "system", 00:14:28.993 "dma_device_type": 1 00:14:28.993 }, 00:14:28.993 { 00:14:28.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.993 "dma_device_type": 2 00:14:28.993 } 00:14:28.993 ], 00:14:28.993 "driver_specific": {} 00:14:28.993 } 00:14:28.993 ] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.993 [2024-11-26 13:27:17.446293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:28.993 [2024-11-26 13:27:17.446343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:28.993 [2024-11-26 13:27:17.446370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:28.993 [2024-11-26 13:27:17.448352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.993 13:27:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.993 "name": "Existed_Raid", 00:14:28.993 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:28.993 "strip_size_kb": 64, 00:14:28.993 "state": "configuring", 00:14:28.993 "raid_level": "raid5f", 00:14:28.993 "superblock": true, 00:14:28.993 "num_base_bdevs": 3, 00:14:28.993 "num_base_bdevs_discovered": 2, 00:14:28.993 "num_base_bdevs_operational": 3, 00:14:28.993 "base_bdevs_list": [ 00:14:28.993 { 00:14:28.993 "name": "BaseBdev1", 00:14:28.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.993 "is_configured": false, 00:14:28.993 "data_offset": 0, 00:14:28.993 "data_size": 0 00:14:28.993 }, 00:14:28.993 { 00:14:28.993 "name": "BaseBdev2", 00:14:28.993 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:28.993 "is_configured": true, 00:14:28.993 "data_offset": 2048, 00:14:28.993 "data_size": 63488 00:14:28.993 }, 00:14:28.993 { 00:14:28.993 "name": "BaseBdev3", 00:14:28.993 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:28.993 "is_configured": true, 00:14:28.993 "data_offset": 2048, 00:14:28.993 "data_size": 63488 00:14:28.993 } 00:14:28.993 ] 00:14:28.993 }' 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.993 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.561 [2024-11-26 13:27:17.982413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.561 
13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.561 13:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.561 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.561 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:29.561 "name": "Existed_Raid", 00:14:29.561 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:29.561 "strip_size_kb": 64, 00:14:29.561 "state": "configuring", 00:14:29.561 "raid_level": "raid5f", 00:14:29.561 "superblock": true, 00:14:29.561 "num_base_bdevs": 3, 00:14:29.561 "num_base_bdevs_discovered": 1, 00:14:29.561 "num_base_bdevs_operational": 3, 00:14:29.561 "base_bdevs_list": [ 00:14:29.561 { 00:14:29.561 "name": "BaseBdev1", 00:14:29.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.561 "is_configured": false, 00:14:29.561 "data_offset": 0, 00:14:29.561 "data_size": 0 00:14:29.561 }, 00:14:29.561 { 00:14:29.561 "name": null, 00:14:29.561 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:29.561 "is_configured": false, 00:14:29.561 "data_offset": 0, 00:14:29.561 "data_size": 63488 00:14:29.561 }, 00:14:29.561 { 00:14:29.562 "name": "BaseBdev3", 00:14:29.562 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:29.562 "is_configured": true, 00:14:29.562 "data_offset": 2048, 00:14:29.562 "data_size": 63488 00:14:29.562 } 00:14:29.562 ] 00:14:29.562 }' 00:14:29.562 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.562 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.130 [2024-11-26 13:27:18.590227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.130 BaseBdev1 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:30.130 
13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.130 [ 00:14:30.130 { 00:14:30.130 "name": "BaseBdev1", 00:14:30.130 "aliases": [ 00:14:30.130 "14edf2da-a217-4b50-8877-d3bbc011ece4" 00:14:30.130 ], 00:14:30.130 "product_name": "Malloc disk", 00:14:30.130 "block_size": 512, 00:14:30.130 "num_blocks": 65536, 00:14:30.130 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:30.130 "assigned_rate_limits": { 00:14:30.130 "rw_ios_per_sec": 0, 00:14:30.130 "rw_mbytes_per_sec": 0, 00:14:30.130 "r_mbytes_per_sec": 0, 00:14:30.130 "w_mbytes_per_sec": 0 00:14:30.130 }, 00:14:30.130 "claimed": true, 00:14:30.130 "claim_type": "exclusive_write", 00:14:30.130 "zoned": false, 00:14:30.130 "supported_io_types": { 00:14:30.130 "read": true, 00:14:30.130 "write": true, 00:14:30.130 "unmap": true, 00:14:30.130 "flush": true, 00:14:30.130 "reset": true, 00:14:30.130 "nvme_admin": false, 00:14:30.130 "nvme_io": false, 00:14:30.130 "nvme_io_md": false, 00:14:30.130 "write_zeroes": true, 00:14:30.130 "zcopy": true, 00:14:30.130 "get_zone_info": false, 00:14:30.130 "zone_management": false, 00:14:30.130 "zone_append": false, 00:14:30.130 "compare": false, 00:14:30.130 "compare_and_write": false, 00:14:30.130 "abort": true, 00:14:30.130 "seek_hole": false, 00:14:30.130 "seek_data": false, 00:14:30.130 "copy": true, 00:14:30.130 "nvme_iov_md": false 00:14:30.130 }, 00:14:30.130 "memory_domains": [ 00:14:30.130 { 00:14:30.130 "dma_device_id": "system", 00:14:30.130 "dma_device_type": 1 00:14:30.130 }, 00:14:30.130 { 00:14:30.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.130 "dma_device_type": 2 00:14:30.130 } 00:14:30.130 ], 00:14:30.130 "driver_specific": {} 00:14:30.130 } 00:14:30.130 ] 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.130 
13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.130 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:30.131 "name": "Existed_Raid", 00:14:30.131 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:30.131 "strip_size_kb": 64, 00:14:30.131 "state": "configuring", 00:14:30.131 "raid_level": "raid5f", 00:14:30.131 "superblock": true, 00:14:30.131 "num_base_bdevs": 3, 00:14:30.131 "num_base_bdevs_discovered": 2, 00:14:30.131 "num_base_bdevs_operational": 3, 00:14:30.131 "base_bdevs_list": [ 00:14:30.131 { 00:14:30.131 "name": "BaseBdev1", 00:14:30.131 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:30.131 "is_configured": true, 00:14:30.131 "data_offset": 2048, 00:14:30.131 "data_size": 63488 00:14:30.131 }, 00:14:30.131 { 00:14:30.131 "name": null, 00:14:30.131 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:30.131 "is_configured": false, 00:14:30.131 "data_offset": 0, 00:14:30.131 "data_size": 63488 00:14:30.131 }, 00:14:30.131 { 00:14:30.131 "name": "BaseBdev3", 00:14:30.131 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:30.131 "is_configured": true, 00:14:30.131 "data_offset": 2048, 00:14:30.131 "data_size": 63488 00:14:30.131 } 00:14:30.131 ] 00:14:30.131 }' 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.131 13:27:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.699 [2024-11-26 13:27:19.210453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.699 13:27:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.699 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.957 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.957 "name": "Existed_Raid", 00:14:30.957 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:30.957 "strip_size_kb": 64, 00:14:30.957 "state": "configuring", 00:14:30.957 "raid_level": "raid5f", 00:14:30.957 "superblock": true, 00:14:30.957 "num_base_bdevs": 3, 00:14:30.957 "num_base_bdevs_discovered": 1, 00:14:30.957 "num_base_bdevs_operational": 3, 00:14:30.957 "base_bdevs_list": [ 00:14:30.957 { 00:14:30.957 "name": "BaseBdev1", 00:14:30.957 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:30.957 "is_configured": true, 00:14:30.957 "data_offset": 2048, 00:14:30.957 "data_size": 63488 00:14:30.957 }, 00:14:30.957 { 00:14:30.957 "name": null, 00:14:30.958 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:30.958 "is_configured": false, 00:14:30.958 "data_offset": 0, 00:14:30.958 "data_size": 63488 00:14:30.958 }, 00:14:30.958 { 00:14:30.958 "name": null, 00:14:30.958 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:30.958 "is_configured": false, 00:14:30.958 "data_offset": 0, 00:14:30.958 "data_size": 63488 00:14:30.958 } 00:14:30.958 ] 00:14:30.958 }' 00:14:30.958 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.958 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.216 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:31.216 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.216 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:31.216 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.216 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.475 [2024-11-26 13:27:19.790610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.475 13:27:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.475 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.476 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.476 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.476 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.476 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.476 "name": "Existed_Raid", 00:14:31.476 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:31.476 "strip_size_kb": 64, 00:14:31.476 "state": "configuring", 00:14:31.476 "raid_level": "raid5f", 00:14:31.476 "superblock": true, 00:14:31.476 "num_base_bdevs": 3, 00:14:31.476 "num_base_bdevs_discovered": 2, 00:14:31.476 "num_base_bdevs_operational": 3, 00:14:31.476 "base_bdevs_list": [ 00:14:31.476 { 00:14:31.476 "name": "BaseBdev1", 00:14:31.476 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:31.476 "is_configured": true, 00:14:31.476 "data_offset": 2048, 00:14:31.476 "data_size": 63488 00:14:31.476 }, 00:14:31.476 { 00:14:31.476 "name": null, 00:14:31.476 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:31.476 "is_configured": false, 00:14:31.476 "data_offset": 0, 00:14:31.476 "data_size": 63488 00:14:31.476 }, 00:14:31.476 { 
00:14:31.476 "name": "BaseBdev3", 00:14:31.476 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:31.476 "is_configured": true, 00:14:31.476 "data_offset": 2048, 00:14:31.476 "data_size": 63488 00:14:31.476 } 00:14:31.476 ] 00:14:31.476 }' 00:14:31.476 13:27:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.476 13:27:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.043 [2024-11-26 13:27:20.378799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.043 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.044 "name": "Existed_Raid", 00:14:32.044 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:32.044 "strip_size_kb": 64, 00:14:32.044 "state": "configuring", 00:14:32.044 "raid_level": "raid5f", 00:14:32.044 "superblock": true, 00:14:32.044 "num_base_bdevs": 3, 00:14:32.044 "num_base_bdevs_discovered": 1, 00:14:32.044 
"num_base_bdevs_operational": 3, 00:14:32.044 "base_bdevs_list": [ 00:14:32.044 { 00:14:32.044 "name": null, 00:14:32.044 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:32.044 "is_configured": false, 00:14:32.044 "data_offset": 0, 00:14:32.044 "data_size": 63488 00:14:32.044 }, 00:14:32.044 { 00:14:32.044 "name": null, 00:14:32.044 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:32.044 "is_configured": false, 00:14:32.044 "data_offset": 0, 00:14:32.044 "data_size": 63488 00:14:32.044 }, 00:14:32.044 { 00:14:32.044 "name": "BaseBdev3", 00:14:32.044 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:32.044 "is_configured": true, 00:14:32.044 "data_offset": 2048, 00:14:32.044 "data_size": 63488 00:14:32.044 } 00:14:32.044 ] 00:14:32.044 }' 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.044 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.611 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.611 13:27:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:32.611 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.611 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.611 13:27:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.611 13:27:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.611 [2024-11-26 13:27:21.033796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.611 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.611 "name": "Existed_Raid", 00:14:32.611 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:32.611 "strip_size_kb": 64, 00:14:32.611 "state": "configuring", 00:14:32.611 "raid_level": "raid5f", 00:14:32.611 "superblock": true, 00:14:32.611 "num_base_bdevs": 3, 00:14:32.611 "num_base_bdevs_discovered": 2, 00:14:32.611 "num_base_bdevs_operational": 3, 00:14:32.611 "base_bdevs_list": [ 00:14:32.611 { 00:14:32.611 "name": null, 00:14:32.611 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:32.611 "is_configured": false, 00:14:32.611 "data_offset": 0, 00:14:32.611 "data_size": 63488 00:14:32.611 }, 00:14:32.611 { 00:14:32.611 "name": "BaseBdev2", 00:14:32.611 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:32.611 "is_configured": true, 00:14:32.611 "data_offset": 2048, 00:14:32.611 "data_size": 63488 00:14:32.611 }, 00:14:32.611 { 00:14:32.611 "name": "BaseBdev3", 00:14:32.611 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:32.611 "is_configured": true, 00:14:32.612 "data_offset": 2048, 00:14:32.612 "data_size": 63488 00:14:32.612 } 00:14:32.612 ] 00:14:32.612 }' 00:14:32.612 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.612 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.179 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.180 13:27:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 14edf2da-a217-4b50-8877-d3bbc011ece4 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.180 [2024-11-26 13:27:21.700551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:33.180 [2024-11-26 13:27:21.700758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:33.180 [2024-11-26 13:27:21.700778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.180 [2024-11-26 13:27:21.701021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:33.180 NewBaseBdev 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.180 13:27:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.180 [2024-11-26 13:27:21.704960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:33.180 [2024-11-26 13:27:21.704983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:33.180 [2024-11-26 13:27:21.705137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.180 [ 00:14:33.180 { 00:14:33.180 "name": "NewBaseBdev", 00:14:33.180 
"aliases": [ 00:14:33.180 "14edf2da-a217-4b50-8877-d3bbc011ece4" 00:14:33.180 ], 00:14:33.180 "product_name": "Malloc disk", 00:14:33.180 "block_size": 512, 00:14:33.180 "num_blocks": 65536, 00:14:33.180 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:33.180 "assigned_rate_limits": { 00:14:33.180 "rw_ios_per_sec": 0, 00:14:33.180 "rw_mbytes_per_sec": 0, 00:14:33.180 "r_mbytes_per_sec": 0, 00:14:33.180 "w_mbytes_per_sec": 0 00:14:33.180 }, 00:14:33.180 "claimed": true, 00:14:33.180 "claim_type": "exclusive_write", 00:14:33.180 "zoned": false, 00:14:33.180 "supported_io_types": { 00:14:33.180 "read": true, 00:14:33.180 "write": true, 00:14:33.180 "unmap": true, 00:14:33.180 "flush": true, 00:14:33.180 "reset": true, 00:14:33.180 "nvme_admin": false, 00:14:33.180 "nvme_io": false, 00:14:33.180 "nvme_io_md": false, 00:14:33.180 "write_zeroes": true, 00:14:33.180 "zcopy": true, 00:14:33.180 "get_zone_info": false, 00:14:33.180 "zone_management": false, 00:14:33.180 "zone_append": false, 00:14:33.180 "compare": false, 00:14:33.180 "compare_and_write": false, 00:14:33.180 "abort": true, 00:14:33.180 "seek_hole": false, 00:14:33.180 "seek_data": false, 00:14:33.180 "copy": true, 00:14:33.180 "nvme_iov_md": false 00:14:33.180 }, 00:14:33.180 "memory_domains": [ 00:14:33.180 { 00:14:33.180 "dma_device_id": "system", 00:14:33.180 "dma_device_type": 1 00:14:33.180 }, 00:14:33.180 { 00:14:33.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.180 "dma_device_type": 2 00:14:33.180 } 00:14:33.180 ], 00:14:33.180 "driver_specific": {} 00:14:33.180 } 00:14:33.180 ] 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:33.180 13:27:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.180 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.439 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.439 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.439 "name": "Existed_Raid", 00:14:33.439 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:33.439 "strip_size_kb": 64, 00:14:33.439 "state": "online", 00:14:33.439 "raid_level": "raid5f", 00:14:33.439 "superblock": true, 00:14:33.439 
"num_base_bdevs": 3, 00:14:33.439 "num_base_bdevs_discovered": 3, 00:14:33.439 "num_base_bdevs_operational": 3, 00:14:33.439 "base_bdevs_list": [ 00:14:33.439 { 00:14:33.439 "name": "NewBaseBdev", 00:14:33.439 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:33.439 "is_configured": true, 00:14:33.439 "data_offset": 2048, 00:14:33.439 "data_size": 63488 00:14:33.439 }, 00:14:33.439 { 00:14:33.439 "name": "BaseBdev2", 00:14:33.439 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:33.439 "is_configured": true, 00:14:33.439 "data_offset": 2048, 00:14:33.439 "data_size": 63488 00:14:33.439 }, 00:14:33.439 { 00:14:33.439 "name": "BaseBdev3", 00:14:33.439 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:33.439 "is_configured": true, 00:14:33.439 "data_offset": 2048, 00:14:33.439 "data_size": 63488 00:14:33.439 } 00:14:33.439 ] 00:14:33.439 }' 00:14:33.439 13:27:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.439 13:27:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.698 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.698 [2024-11-26 13:27:22.253966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.957 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.958 "name": "Existed_Raid", 00:14:33.958 "aliases": [ 00:14:33.958 "01afa87a-aa9a-4525-884b-47639872aebe" 00:14:33.958 ], 00:14:33.958 "product_name": "Raid Volume", 00:14:33.958 "block_size": 512, 00:14:33.958 "num_blocks": 126976, 00:14:33.958 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:33.958 "assigned_rate_limits": { 00:14:33.958 "rw_ios_per_sec": 0, 00:14:33.958 "rw_mbytes_per_sec": 0, 00:14:33.958 "r_mbytes_per_sec": 0, 00:14:33.958 "w_mbytes_per_sec": 0 00:14:33.958 }, 00:14:33.958 "claimed": false, 00:14:33.958 "zoned": false, 00:14:33.958 "supported_io_types": { 00:14:33.958 "read": true, 00:14:33.958 "write": true, 00:14:33.958 "unmap": false, 00:14:33.958 "flush": false, 00:14:33.958 "reset": true, 00:14:33.958 "nvme_admin": false, 00:14:33.958 "nvme_io": false, 00:14:33.958 "nvme_io_md": false, 00:14:33.958 "write_zeroes": true, 00:14:33.958 "zcopy": false, 00:14:33.958 "get_zone_info": false, 00:14:33.958 "zone_management": false, 00:14:33.958 "zone_append": false, 00:14:33.958 "compare": false, 00:14:33.958 "compare_and_write": false, 00:14:33.958 "abort": false, 00:14:33.958 "seek_hole": false, 00:14:33.958 "seek_data": false, 00:14:33.958 "copy": false, 00:14:33.958 "nvme_iov_md": false 00:14:33.958 }, 00:14:33.958 "driver_specific": { 00:14:33.958 "raid": { 00:14:33.958 "uuid": "01afa87a-aa9a-4525-884b-47639872aebe", 00:14:33.958 
"strip_size_kb": 64, 00:14:33.958 "state": "online", 00:14:33.958 "raid_level": "raid5f", 00:14:33.958 "superblock": true, 00:14:33.958 "num_base_bdevs": 3, 00:14:33.958 "num_base_bdevs_discovered": 3, 00:14:33.958 "num_base_bdevs_operational": 3, 00:14:33.958 "base_bdevs_list": [ 00:14:33.958 { 00:14:33.958 "name": "NewBaseBdev", 00:14:33.958 "uuid": "14edf2da-a217-4b50-8877-d3bbc011ece4", 00:14:33.958 "is_configured": true, 00:14:33.958 "data_offset": 2048, 00:14:33.958 "data_size": 63488 00:14:33.958 }, 00:14:33.958 { 00:14:33.958 "name": "BaseBdev2", 00:14:33.958 "uuid": "e91ba6d5-15fd-43a2-a83f-a11134302a44", 00:14:33.958 "is_configured": true, 00:14:33.958 "data_offset": 2048, 00:14:33.958 "data_size": 63488 00:14:33.958 }, 00:14:33.958 { 00:14:33.958 "name": "BaseBdev3", 00:14:33.958 "uuid": "f5028fe7-11c3-4979-a0c1-67fd5a27394c", 00:14:33.958 "is_configured": true, 00:14:33.958 "data_offset": 2048, 00:14:33.958 "data_size": 63488 00:14:33.958 } 00:14:33.958 ] 00:14:33.958 } 00:14:33.958 } 00:14:33.958 }' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:33.958 BaseBdev2 00:14:33.958 BaseBdev3' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.958 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.218 
13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.218 [2024-11-26 13:27:22.581876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.218 [2024-11-26 13:27:22.581900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.218 [2024-11-26 13:27:22.581971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.218 [2024-11-26 13:27:22.582270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.218 [2024-11-26 13:27:22.582291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80131 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80131 ']' 00:14:34.218 13:27:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80131 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80131 00:14:34.218 killing process with pid 80131 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80131' 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80131 00:14:34.218 [2024-11-26 13:27:22.618945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.218 13:27:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80131 00:14:34.477 [2024-11-26 13:27:22.820620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.414 13:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:35.414 00:14:35.414 real 0m11.501s 00:14:35.414 user 0m19.394s 00:14:35.414 sys 0m1.654s 00:14:35.414 13:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.414 ************************************ 00:14:35.414 END TEST raid5f_state_function_test_sb 00:14:35.414 ************************************ 00:14:35.414 13:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.414 13:27:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:14:35.414 13:27:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:35.414 13:27:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.414 13:27:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.414 ************************************ 00:14:35.414 START TEST raid5f_superblock_test 00:14:35.414 ************************************ 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:35.414 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:35.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80763 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80763 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80763 ']' 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.415 13:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.415 [2024-11-26 13:27:23.823651] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:14:35.415 [2024-11-26 13:27:23.824066] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80763 ] 00:14:35.674 [2024-11-26 13:27:24.006145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.674 [2024-11-26 13:27:24.104806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.933 [2024-11-26 13:27:24.272358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.933 [2024-11-26 13:27:24.272670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.501 malloc1 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.501 [2024-11-26 13:27:24.834170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.501 [2024-11-26 13:27:24.834491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.501 [2024-11-26 13:27:24.834535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.501 [2024-11-26 13:27:24.834551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.501 [2024-11-26 13:27:24.836981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.501 [2024-11-26 13:27:24.837022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.501 pt1 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.501 malloc2 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.501 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.501 [2024-11-26 13:27:24.879582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.501 [2024-11-26 13:27:24.879638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.501 [2024-11-26 13:27:24.879666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.502 [2024-11-26 13:27:24.879680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.502 [2024-11-26 13:27:24.882011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.502 [2024-11-26 13:27:24.882049] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.502 pt2 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 malloc3 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 [2024-11-26 13:27:24.932818] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:36.502 [2024-11-26 13:27:24.932872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.502 [2024-11-26 13:27:24.932902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:36.502 [2024-11-26 13:27:24.932915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.502 [2024-11-26 13:27:24.935275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.502 [2024-11-26 13:27:24.935314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:36.502 pt3 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 [2024-11-26 13:27:24.944880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.502 [2024-11-26 13:27:24.946944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:36.502 [2024-11-26 13:27:24.947024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:36.502 [2024-11-26 13:27:24.947215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:36.502 [2024-11-26 13:27:24.947254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:36.502 [2024-11-26 13:27:24.947503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:36.502 [2024-11-26 13:27:24.951634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:36.502 [2024-11-26 13:27:24.951658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:36.502 [2024-11-26 13:27:24.951852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 13:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.502 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.502 "name": "raid_bdev1", 00:14:36.502 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:36.502 "strip_size_kb": 64, 00:14:36.502 "state": "online", 00:14:36.502 "raid_level": "raid5f", 00:14:36.502 "superblock": true, 00:14:36.502 "num_base_bdevs": 3, 00:14:36.502 "num_base_bdevs_discovered": 3, 00:14:36.502 "num_base_bdevs_operational": 3, 00:14:36.502 "base_bdevs_list": [ 00:14:36.502 { 00:14:36.502 "name": "pt1", 00:14:36.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.502 "is_configured": true, 00:14:36.502 "data_offset": 2048, 00:14:36.502 "data_size": 63488 00:14:36.502 }, 00:14:36.502 { 00:14:36.502 "name": "pt2", 00:14:36.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.502 "is_configured": true, 00:14:36.502 "data_offset": 2048, 00:14:36.502 "data_size": 63488 00:14:36.502 }, 00:14:36.502 { 00:14:36.502 "name": "pt3", 00:14:36.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.502 "is_configured": true, 00:14:36.502 "data_offset": 2048, 00:14:36.502 "data_size": 63488 00:14:36.502 } 00:14:36.502 ] 00:14:36.502 }' 00:14:36.502 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.502 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:37.070 13:27:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.070 [2024-11-26 13:27:25.484783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.070 "name": "raid_bdev1", 00:14:37.070 "aliases": [ 00:14:37.070 "1b7995fc-3eba-484d-a674-bb4b2833ce16" 00:14:37.070 ], 00:14:37.070 "product_name": "Raid Volume", 00:14:37.070 "block_size": 512, 00:14:37.070 "num_blocks": 126976, 00:14:37.070 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:37.070 "assigned_rate_limits": { 00:14:37.070 "rw_ios_per_sec": 0, 00:14:37.070 "rw_mbytes_per_sec": 0, 00:14:37.070 "r_mbytes_per_sec": 0, 00:14:37.070 "w_mbytes_per_sec": 0 00:14:37.070 }, 00:14:37.070 "claimed": false, 00:14:37.070 "zoned": false, 00:14:37.070 "supported_io_types": { 00:14:37.070 "read": true, 00:14:37.070 "write": true, 00:14:37.070 "unmap": false, 00:14:37.070 "flush": false, 00:14:37.070 "reset": true, 00:14:37.070 "nvme_admin": false, 00:14:37.070 "nvme_io": false, 00:14:37.070 "nvme_io_md": false, 
00:14:37.070 "write_zeroes": true, 00:14:37.070 "zcopy": false, 00:14:37.070 "get_zone_info": false, 00:14:37.070 "zone_management": false, 00:14:37.070 "zone_append": false, 00:14:37.070 "compare": false, 00:14:37.070 "compare_and_write": false, 00:14:37.070 "abort": false, 00:14:37.070 "seek_hole": false, 00:14:37.070 "seek_data": false, 00:14:37.070 "copy": false, 00:14:37.070 "nvme_iov_md": false 00:14:37.070 }, 00:14:37.070 "driver_specific": { 00:14:37.070 "raid": { 00:14:37.070 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:37.070 "strip_size_kb": 64, 00:14:37.070 "state": "online", 00:14:37.070 "raid_level": "raid5f", 00:14:37.070 "superblock": true, 00:14:37.070 "num_base_bdevs": 3, 00:14:37.070 "num_base_bdevs_discovered": 3, 00:14:37.070 "num_base_bdevs_operational": 3, 00:14:37.070 "base_bdevs_list": [ 00:14:37.070 { 00:14:37.070 "name": "pt1", 00:14:37.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.070 "is_configured": true, 00:14:37.070 "data_offset": 2048, 00:14:37.070 "data_size": 63488 00:14:37.070 }, 00:14:37.070 { 00:14:37.070 "name": "pt2", 00:14:37.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.070 "is_configured": true, 00:14:37.070 "data_offset": 2048, 00:14:37.070 "data_size": 63488 00:14:37.070 }, 00:14:37.070 { 00:14:37.070 "name": "pt3", 00:14:37.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.070 "is_configured": true, 00:14:37.070 "data_offset": 2048, 00:14:37.070 "data_size": 63488 00:14:37.070 } 00:14:37.070 ] 00:14:37.070 } 00:14:37.070 } 00:14:37.070 }' 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:37.070 pt2 00:14:37.070 pt3' 00:14:37.070 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.329 
13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.329 [2024-11-26 13:27:25.816845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1b7995fc-3eba-484d-a674-bb4b2833ce16 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1b7995fc-3eba-484d-a674-bb4b2833ce16 ']' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.329 13:27:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.329 [2024-11-26 13:27:25.864720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.329 [2024-11-26 13:27:25.864746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.329 [2024-11-26 13:27:25.864804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.329 [2024-11-26 13:27:25.864874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.329 [2024-11-26 13:27:25.864887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.329 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 13:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 [2024-11-26 13:27:26.012795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:37.589 [2024-11-26 13:27:26.015048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:37.589 [2024-11-26 13:27:26.015115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:37.589 [2024-11-26 13:27:26.015200] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:37.589 [2024-11-26 13:27:26.015288] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:37.589 [2024-11-26 13:27:26.015322] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:37.589 [2024-11-26 13:27:26.015348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.589 [2024-11-26 13:27:26.015360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:37.589 request: 00:14:37.589 { 00:14:37.589 "name": "raid_bdev1", 00:14:37.589 "raid_level": "raid5f", 00:14:37.589 "base_bdevs": [ 00:14:37.589 "malloc1", 00:14:37.589 "malloc2", 00:14:37.589 "malloc3" 00:14:37.589 ], 00:14:37.589 "strip_size_kb": 64, 00:14:37.589 "superblock": false, 00:14:37.589 "method": "bdev_raid_create", 00:14:37.589 "req_id": 1 00:14:37.589 } 00:14:37.589 Got JSON-RPC error response 00:14:37.589 response: 00:14:37.589 { 00:14:37.589 "code": -17, 00:14:37.589 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:37.589 } 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 [2024-11-26 13:27:26.080764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:37.589 [2024-11-26 13:27:26.080934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.589 [2024-11-26 13:27:26.080999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:37.589 [2024-11-26 13:27:26.081114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.589 [2024-11-26 13:27:26.083753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.589 [2024-11-26 13:27:26.083905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:37.589 [2024-11-26 13:27:26.084079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:37.589 [2024-11-26 13:27:26.084265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:37.589 pt1 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.589 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.590 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.590 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.590 "name": "raid_bdev1", 00:14:37.590 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:37.590 "strip_size_kb": 64, 00:14:37.590 "state": "configuring", 00:14:37.590 "raid_level": "raid5f", 00:14:37.590 "superblock": true, 00:14:37.590 "num_base_bdevs": 3, 00:14:37.590 "num_base_bdevs_discovered": 1, 00:14:37.590 
"num_base_bdevs_operational": 3, 00:14:37.590 "base_bdevs_list": [ 00:14:37.590 { 00:14:37.590 "name": "pt1", 00:14:37.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.590 "is_configured": true, 00:14:37.590 "data_offset": 2048, 00:14:37.590 "data_size": 63488 00:14:37.590 }, 00:14:37.590 { 00:14:37.590 "name": null, 00:14:37.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.590 "is_configured": false, 00:14:37.590 "data_offset": 2048, 00:14:37.590 "data_size": 63488 00:14:37.590 }, 00:14:37.590 { 00:14:37.590 "name": null, 00:14:37.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.590 "is_configured": false, 00:14:37.590 "data_offset": 2048, 00:14:37.590 "data_size": 63488 00:14:37.590 } 00:14:37.590 ] 00:14:37.590 }' 00:14:37.590 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.590 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.157 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:38.157 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.157 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.157 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.157 [2024-11-26 13:27:26.624867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.157 [2024-11-26 13:27:26.625045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.157 [2024-11-26 13:27:26.625082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:38.158 [2024-11-26 13:27:26.625097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.158 [2024-11-26 13:27:26.625538] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.158 [2024-11-26 13:27:26.625566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.158 [2024-11-26 13:27:26.625654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:38.158 [2024-11-26 13:27:26.625678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.158 pt2 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.158 [2024-11-26 13:27:26.632899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.158 "name": "raid_bdev1", 00:14:38.158 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:38.158 "strip_size_kb": 64, 00:14:38.158 "state": "configuring", 00:14:38.158 "raid_level": "raid5f", 00:14:38.158 "superblock": true, 00:14:38.158 "num_base_bdevs": 3, 00:14:38.158 "num_base_bdevs_discovered": 1, 00:14:38.158 "num_base_bdevs_operational": 3, 00:14:38.158 "base_bdevs_list": [ 00:14:38.158 { 00:14:38.158 "name": "pt1", 00:14:38.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.158 "is_configured": true, 00:14:38.158 "data_offset": 2048, 00:14:38.158 "data_size": 63488 00:14:38.158 }, 00:14:38.158 { 00:14:38.158 "name": null, 00:14:38.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.158 "is_configured": false, 00:14:38.158 "data_offset": 0, 00:14:38.158 "data_size": 63488 00:14:38.158 }, 00:14:38.158 { 00:14:38.158 "name": null, 00:14:38.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.158 "is_configured": false, 00:14:38.158 "data_offset": 2048, 00:14:38.158 "data_size": 63488 00:14:38.158 } 00:14:38.158 ] 00:14:38.158 }' 00:14:38.158 13:27:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.158 13:27:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.727 [2024-11-26 13:27:27.168964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.727 [2024-11-26 13:27:27.169021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.727 [2024-11-26 13:27:27.169040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:38.727 [2024-11-26 13:27:27.169054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.727 [2024-11-26 13:27:27.169433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.727 [2024-11-26 13:27:27.169460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.727 [2024-11-26 13:27:27.169521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:38.727 [2024-11-26 13:27:27.169550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.727 pt2 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:38.727 13:27:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.727 [2024-11-26 13:27:27.180980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:38.727 [2024-11-26 13:27:27.181030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.727 [2024-11-26 13:27:27.181049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:38.727 [2024-11-26 13:27:27.181063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.727 [2024-11-26 13:27:27.181429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.727 [2024-11-26 13:27:27.181458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:38.727 [2024-11-26 13:27:27.181519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:38.727 [2024-11-26 13:27:27.181547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:38.727 [2024-11-26 13:27:27.181668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:38.727 [2024-11-26 13:27:27.181686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:38.727 [2024-11-26 13:27:27.181975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:38.727 [2024-11-26 13:27:27.185736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:38.727 pt3 00:14:38.727 [2024-11-26 13:27:27.185880] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:38.727 [2024-11-26 13:27:27.186085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.727 "name": "raid_bdev1", 00:14:38.727 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:38.727 "strip_size_kb": 64, 00:14:38.727 "state": "online", 00:14:38.727 "raid_level": "raid5f", 00:14:38.727 "superblock": true, 00:14:38.727 "num_base_bdevs": 3, 00:14:38.727 "num_base_bdevs_discovered": 3, 00:14:38.727 "num_base_bdevs_operational": 3, 00:14:38.727 "base_bdevs_list": [ 00:14:38.727 { 00:14:38.727 "name": "pt1", 00:14:38.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.727 "is_configured": true, 00:14:38.727 "data_offset": 2048, 00:14:38.727 "data_size": 63488 00:14:38.727 }, 00:14:38.727 { 00:14:38.727 "name": "pt2", 00:14:38.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.727 "is_configured": true, 00:14:38.727 "data_offset": 2048, 00:14:38.727 "data_size": 63488 00:14:38.727 }, 00:14:38.727 { 00:14:38.727 "name": "pt3", 00:14:38.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.727 "is_configured": true, 00:14:38.727 "data_offset": 2048, 00:14:38.727 "data_size": 63488 00:14:38.727 } 00:14:38.727 ] 00:14:38.727 }' 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.727 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.297 [2024-11-26 13:27:27.714535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.297 "name": "raid_bdev1", 00:14:39.297 "aliases": [ 00:14:39.297 "1b7995fc-3eba-484d-a674-bb4b2833ce16" 00:14:39.297 ], 00:14:39.297 "product_name": "Raid Volume", 00:14:39.297 "block_size": 512, 00:14:39.297 "num_blocks": 126976, 00:14:39.297 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:39.297 "assigned_rate_limits": { 00:14:39.297 "rw_ios_per_sec": 0, 00:14:39.297 "rw_mbytes_per_sec": 0, 00:14:39.297 "r_mbytes_per_sec": 0, 00:14:39.297 "w_mbytes_per_sec": 0 00:14:39.297 }, 00:14:39.297 "claimed": false, 00:14:39.297 "zoned": false, 00:14:39.297 "supported_io_types": { 00:14:39.297 "read": true, 00:14:39.297 "write": true, 00:14:39.297 "unmap": false, 00:14:39.297 "flush": false, 00:14:39.297 "reset": true, 00:14:39.297 "nvme_admin": false, 00:14:39.297 "nvme_io": false, 00:14:39.297 "nvme_io_md": false, 00:14:39.297 "write_zeroes": true, 00:14:39.297 "zcopy": false, 00:14:39.297 
"get_zone_info": false, 00:14:39.297 "zone_management": false, 00:14:39.297 "zone_append": false, 00:14:39.297 "compare": false, 00:14:39.297 "compare_and_write": false, 00:14:39.297 "abort": false, 00:14:39.297 "seek_hole": false, 00:14:39.297 "seek_data": false, 00:14:39.297 "copy": false, 00:14:39.297 "nvme_iov_md": false 00:14:39.297 }, 00:14:39.297 "driver_specific": { 00:14:39.297 "raid": { 00:14:39.297 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:39.297 "strip_size_kb": 64, 00:14:39.297 "state": "online", 00:14:39.297 "raid_level": "raid5f", 00:14:39.297 "superblock": true, 00:14:39.297 "num_base_bdevs": 3, 00:14:39.297 "num_base_bdevs_discovered": 3, 00:14:39.297 "num_base_bdevs_operational": 3, 00:14:39.297 "base_bdevs_list": [ 00:14:39.297 { 00:14:39.297 "name": "pt1", 00:14:39.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.297 "is_configured": true, 00:14:39.297 "data_offset": 2048, 00:14:39.297 "data_size": 63488 00:14:39.297 }, 00:14:39.297 { 00:14:39.297 "name": "pt2", 00:14:39.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.297 "is_configured": true, 00:14:39.297 "data_offset": 2048, 00:14:39.297 "data_size": 63488 00:14:39.297 }, 00:14:39.297 { 00:14:39.297 "name": "pt3", 00:14:39.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.297 "is_configured": true, 00:14:39.297 "data_offset": 2048, 00:14:39.297 "data_size": 63488 00:14:39.297 } 00:14:39.297 ] 00:14:39.297 } 00:14:39.297 } 00:14:39.297 }' 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:39.297 pt2 00:14:39.297 pt3' 00:14:39.297 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.557 13:27:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.557 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.558 13:27:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.558 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.558 13:27:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:39.558 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.558 13:27:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.558 [2024-11-26 13:27:28.042571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1b7995fc-3eba-484d-a674-bb4b2833ce16 '!=' 1b7995fc-3eba-484d-a674-bb4b2833ce16 ']' 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.558 [2024-11-26 13:27:28.094472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.558 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.817 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.817 "name": "raid_bdev1", 00:14:39.817 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:39.817 "strip_size_kb": 64, 00:14:39.817 "state": "online", 00:14:39.817 "raid_level": "raid5f", 00:14:39.817 "superblock": true, 00:14:39.817 "num_base_bdevs": 3, 00:14:39.817 "num_base_bdevs_discovered": 2, 00:14:39.817 "num_base_bdevs_operational": 2, 00:14:39.817 "base_bdevs_list": [ 00:14:39.817 { 00:14:39.817 "name": null, 00:14:39.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.817 "is_configured": false, 00:14:39.817 "data_offset": 0, 00:14:39.817 "data_size": 63488 00:14:39.817 }, 00:14:39.817 { 00:14:39.817 "name": "pt2", 00:14:39.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.817 "is_configured": true, 00:14:39.817 "data_offset": 2048, 00:14:39.817 "data_size": 63488 00:14:39.817 }, 00:14:39.817 { 00:14:39.817 "name": "pt3", 00:14:39.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.817 "is_configured": true, 00:14:39.817 "data_offset": 2048, 00:14:39.817 "data_size": 63488 00:14:39.817 } 00:14:39.817 ] 00:14:39.817 }' 00:14:39.817 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.817 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.076 [2024-11-26 13:27:28.626557] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.076 [2024-11-26 13:27:28.626583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.076 [2024-11-26 13:27:28.626651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.076 [2024-11-26 13:27:28.626704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.076 [2024-11-26 13:27:28.626723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.076 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.335 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.335 [2024-11-26 13:27:28.706553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.335 [2024-11-26 13:27:28.706603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.335 [2024-11-26 13:27:28.706623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:40.335 [2024-11-26 13:27:28.706636] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:40.335 [2024-11-26 13:27:28.708895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.335 [2024-11-26 13:27:28.708939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.336 [2024-11-26 13:27:28.709006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.336 [2024-11-26 13:27:28.709053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.336 pt2 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.336 "name": "raid_bdev1", 00:14:40.336 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:40.336 "strip_size_kb": 64, 00:14:40.336 "state": "configuring", 00:14:40.336 "raid_level": "raid5f", 00:14:40.336 "superblock": true, 00:14:40.336 "num_base_bdevs": 3, 00:14:40.336 "num_base_bdevs_discovered": 1, 00:14:40.336 "num_base_bdevs_operational": 2, 00:14:40.336 "base_bdevs_list": [ 00:14:40.336 { 00:14:40.336 "name": null, 00:14:40.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.336 "is_configured": false, 00:14:40.336 "data_offset": 2048, 00:14:40.336 "data_size": 63488 00:14:40.336 }, 00:14:40.336 { 00:14:40.336 "name": "pt2", 00:14:40.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.336 "is_configured": true, 00:14:40.336 "data_offset": 2048, 00:14:40.336 "data_size": 63488 00:14:40.336 }, 00:14:40.336 { 00:14:40.336 "name": null, 00:14:40.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.336 "is_configured": false, 00:14:40.336 "data_offset": 2048, 00:14:40.336 "data_size": 63488 00:14:40.336 } 00:14:40.336 ] 00:14:40.336 }' 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.336 13:27:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 [2024-11-26 13:27:29.246714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.903 [2024-11-26 13:27:29.246770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.903 [2024-11-26 13:27:29.246793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:40.903 [2024-11-26 13:27:29.246807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.903 [2024-11-26 13:27:29.247222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.903 [2024-11-26 13:27:29.247264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.903 [2024-11-26 13:27:29.247344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:40.903 [2024-11-26 13:27:29.247387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.903 [2024-11-26 13:27:29.247498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:40.903 [2024-11-26 13:27:29.247517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:40.903 [2024-11-26 13:27:29.247802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:40.903 [2024-11-26 13:27:29.251826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:40.903 pt3 00:14:40.903 [2024-11-26 13:27:29.251976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:40.903 [2024-11-26 13:27:29.252346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.903 13:27:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.903 "name": "raid_bdev1", 00:14:40.903 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:40.903 "strip_size_kb": 64, 00:14:40.903 "state": "online", 00:14:40.903 "raid_level": "raid5f", 00:14:40.903 "superblock": true, 00:14:40.903 "num_base_bdevs": 3, 00:14:40.903 "num_base_bdevs_discovered": 2, 00:14:40.903 "num_base_bdevs_operational": 2, 00:14:40.903 "base_bdevs_list": [ 00:14:40.903 { 00:14:40.903 "name": null, 00:14:40.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.903 "is_configured": false, 00:14:40.903 "data_offset": 2048, 00:14:40.903 "data_size": 63488 00:14:40.903 }, 00:14:40.903 { 00:14:40.903 "name": "pt2", 00:14:40.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.903 "is_configured": true, 00:14:40.903 "data_offset": 2048, 00:14:40.903 "data_size": 63488 00:14:40.903 }, 00:14:40.903 { 00:14:40.903 "name": "pt3", 00:14:40.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.903 "is_configured": true, 00:14:40.903 "data_offset": 2048, 00:14:40.903 "data_size": 63488 00:14:40.903 } 00:14:40.903 ] 00:14:40.903 }' 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.903 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 [2024-11-26 13:27:29.776285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.469 [2024-11-26 13:27:29.776317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.469 [2024-11-26 13:27:29.776371] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.469 [2024-11-26 13:27:29.776441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.469 [2024-11-26 13:27:29.776458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 [2024-11-26 13:27:29.852314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.469 [2024-11-26 13:27:29.852367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.469 [2024-11-26 13:27:29.852390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:41.469 [2024-11-26 13:27:29.852402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.469 [2024-11-26 13:27:29.854651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.469 [2024-11-26 13:27:29.854689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.469 [2024-11-26 13:27:29.854760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.469 [2024-11-26 13:27:29.854803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.469 [2024-11-26 13:27:29.854941] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:41.469 [2024-11-26 13:27:29.854956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.469 [2024-11-26 13:27:29.854974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:41.469 [2024-11-26 13:27:29.855032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.469 pt1 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:41.469 13:27:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.469 "name": "raid_bdev1", 00:14:41.469 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:41.469 "strip_size_kb": 64, 00:14:41.469 "state": "configuring", 00:14:41.469 "raid_level": "raid5f", 00:14:41.469 
"superblock": true, 00:14:41.469 "num_base_bdevs": 3, 00:14:41.469 "num_base_bdevs_discovered": 1, 00:14:41.469 "num_base_bdevs_operational": 2, 00:14:41.469 "base_bdevs_list": [ 00:14:41.469 { 00:14:41.469 "name": null, 00:14:41.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.469 "is_configured": false, 00:14:41.469 "data_offset": 2048, 00:14:41.469 "data_size": 63488 00:14:41.469 }, 00:14:41.469 { 00:14:41.469 "name": "pt2", 00:14:41.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.469 "is_configured": true, 00:14:41.469 "data_offset": 2048, 00:14:41.469 "data_size": 63488 00:14:41.469 }, 00:14:41.469 { 00:14:41.469 "name": null, 00:14:41.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.469 "is_configured": false, 00:14:41.469 "data_offset": 2048, 00:14:41.469 "data_size": 63488 00:14:41.469 } 00:14:41.469 ] 00:14:41.469 }' 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.469 13:27:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.040 [2024-11-26 13:27:30.444450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.040 [2024-11-26 13:27:30.444518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.040 [2024-11-26 13:27:30.444542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:42.040 [2024-11-26 13:27:30.444554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.040 [2024-11-26 13:27:30.444952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.040 [2024-11-26 13:27:30.444974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.040 [2024-11-26 13:27:30.445039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.040 [2024-11-26 13:27:30.445063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.040 [2024-11-26 13:27:30.445172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:42.040 [2024-11-26 13:27:30.445186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.040 [2024-11-26 13:27:30.445478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:42.040 [2024-11-26 13:27:30.449555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:42.040 pt3 00:14:42.040 [2024-11-26 13:27:30.449719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:42.040 [2024-11-26 13:27:30.449974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.040 "name": "raid_bdev1", 00:14:42.040 "uuid": "1b7995fc-3eba-484d-a674-bb4b2833ce16", 00:14:42.040 "strip_size_kb": 64, 00:14:42.040 "state": "online", 00:14:42.040 "raid_level": 
"raid5f", 00:14:42.040 "superblock": true, 00:14:42.040 "num_base_bdevs": 3, 00:14:42.040 "num_base_bdevs_discovered": 2, 00:14:42.040 "num_base_bdevs_operational": 2, 00:14:42.040 "base_bdevs_list": [ 00:14:42.040 { 00:14:42.040 "name": null, 00:14:42.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.040 "is_configured": false, 00:14:42.040 "data_offset": 2048, 00:14:42.040 "data_size": 63488 00:14:42.040 }, 00:14:42.040 { 00:14:42.040 "name": "pt2", 00:14:42.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.040 "is_configured": true, 00:14:42.040 "data_offset": 2048, 00:14:42.040 "data_size": 63488 00:14:42.040 }, 00:14:42.040 { 00:14:42.040 "name": "pt3", 00:14:42.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.040 "is_configured": true, 00:14:42.040 "data_offset": 2048, 00:14:42.040 "data_size": 63488 00:14:42.040 } 00:14:42.040 ] 00:14:42.040 }' 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.040 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.609 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.609 13:27:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:42.609 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.609 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.609 13:27:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.609 [2024-11-26 13:27:31.030430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1b7995fc-3eba-484d-a674-bb4b2833ce16 '!=' 1b7995fc-3eba-484d-a674-bb4b2833ce16 ']' 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80763 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80763 ']' 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80763 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80763 00:14:42.609 killing process with pid 80763 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80763' 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80763 00:14:42.609 [2024-11-26 13:27:31.104639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.609 [2024-11-26 13:27:31.104704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:42.609 [2024-11-26 13:27:31.104756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.609 13:27:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80763 00:14:42.609 [2024-11-26 13:27:31.104773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:42.869 [2024-11-26 13:27:31.308112] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.865 13:27:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:43.865 00:14:43.865 real 0m8.432s 00:14:43.865 user 0m14.040s 00:14:43.865 sys 0m1.226s 00:14:43.865 ************************************ 00:14:43.865 END TEST raid5f_superblock_test 00:14:43.865 ************************************ 00:14:43.865 13:27:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.865 13:27:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.865 13:27:32 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:43.865 13:27:32 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:43.865 13:27:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:43.865 13:27:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.865 13:27:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.865 ************************************ 00:14:43.865 START TEST raid5f_rebuild_test 00:14:43.865 ************************************ 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:43.865 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:43.866 13:27:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81207 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81207 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81207 ']' 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.866 13:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.866 [2024-11-26 13:27:32.327296] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:14:43.866 [2024-11-26 13:27:32.327726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81207 ] 00:14:43.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.866 Zero copy mechanism will not be used. 00:14:44.132 [2024-11-26 13:27:32.504349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.132 [2024-11-26 13:27:32.602468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.390 [2024-11-26 13:27:32.773822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.390 [2024-11-26 13:27:32.773858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 BaseBdev1_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 
13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 [2024-11-26 13:27:33.308264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.960 [2024-11-26 13:27:33.308362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.960 [2024-11-26 13:27:33.308393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.960 [2024-11-26 13:27:33.308411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.960 [2024-11-26 13:27:33.310841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.960 [2024-11-26 13:27:33.310902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.960 BaseBdev1 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 BaseBdev2_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 [2024-11-26 13:27:33.349834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.960 [2024-11-26 13:27:33.349897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.960 [2024-11-26 13:27:33.349922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.960 [2024-11-26 13:27:33.349939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.960 [2024-11-26 13:27:33.352311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.960 [2024-11-26 13:27:33.352523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.960 BaseBdev2 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 BaseBdev3_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 [2024-11-26 13:27:33.402119] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:44.960 [2024-11-26 13:27:33.402176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.960 [2024-11-26 13:27:33.402204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:44.960 [2024-11-26 13:27:33.402220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.960 [2024-11-26 13:27:33.404602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.960 [2024-11-26 13:27:33.404649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.960 BaseBdev3 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 spare_malloc 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 spare_delay 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 [2024-11-26 13:27:33.455617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.960 [2024-11-26 13:27:33.455820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.960 [2024-11-26 13:27:33.455854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:44.960 [2024-11-26 13:27:33.455871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.960 [2024-11-26 13:27:33.458314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.960 [2024-11-26 13:27:33.458360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.960 spare 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 [2024-11-26 13:27:33.463690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.961 [2024-11-26 13:27:33.465742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.961 [2024-11-26 13:27:33.465826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.961 [2024-11-26 13:27:33.465963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:44.961 [2024-11-26 13:27:33.465979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:44.961 [2024-11-26 
13:27:33.466251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:44.961 [2024-11-26 13:27:33.470553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:44.961 [2024-11-26 13:27:33.470695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:44.961 [2024-11-26 13:27:33.471049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.961 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.220 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.220 "name": "raid_bdev1", 00:14:45.220 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:45.220 "strip_size_kb": 64, 00:14:45.220 "state": "online", 00:14:45.220 "raid_level": "raid5f", 00:14:45.220 "superblock": false, 00:14:45.220 "num_base_bdevs": 3, 00:14:45.220 "num_base_bdevs_discovered": 3, 00:14:45.220 "num_base_bdevs_operational": 3, 00:14:45.220 "base_bdevs_list": [ 00:14:45.220 { 00:14:45.220 "name": "BaseBdev1", 00:14:45.220 "uuid": "8b2a682b-69c2-565e-b07c-7013fee675d6", 00:14:45.220 "is_configured": true, 00:14:45.220 "data_offset": 0, 00:14:45.220 "data_size": 65536 00:14:45.220 }, 00:14:45.220 { 00:14:45.220 "name": "BaseBdev2", 00:14:45.220 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:45.220 "is_configured": true, 00:14:45.220 "data_offset": 0, 00:14:45.220 "data_size": 65536 00:14:45.220 }, 00:14:45.220 { 00:14:45.220 "name": "BaseBdev3", 00:14:45.220 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:45.220 "is_configured": true, 00:14:45.220 "data_offset": 0, 00:14:45.220 "data_size": 65536 00:14:45.220 } 00:14:45.220 ] 00:14:45.220 }' 00:14:45.220 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.220 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.479 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.479 13:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.479 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.479 13:27:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.479 [2024-11-26 13:27:33.976419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.479 13:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.479 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:45.479 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.479 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.479 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.479 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.479 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.739 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:45.999 [2024-11-26 13:27:34.360315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:45.999 /dev/nbd0 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.999 1+0 records in 00:14:45.999 1+0 records out 00:14:45.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519773 s, 7.9 MB/s 00:14:45.999 
13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:45.999 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:46.567 512+0 records in 00:14:46.567 512+0 records out 00:14:46.567 67108864 bytes (67 MB, 64 MiB) copied, 0.445292 s, 151 MB/s 00:14:46.567 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:46.567 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.567 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.567 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.567 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:46.567 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:46.567 13:27:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.826 [2024-11-26 13:27:35.158661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.826 [2024-11-26 13:27:35.184297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.826 13:27:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.826 "name": "raid_bdev1", 00:14:46.826 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:46.826 "strip_size_kb": 64, 00:14:46.826 "state": "online", 00:14:46.826 "raid_level": "raid5f", 00:14:46.826 "superblock": false, 00:14:46.826 "num_base_bdevs": 3, 00:14:46.826 "num_base_bdevs_discovered": 2, 00:14:46.826 "num_base_bdevs_operational": 2, 00:14:46.826 "base_bdevs_list": [ 00:14:46.826 { 00:14:46.826 "name": null, 00:14:46.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.826 "is_configured": false, 00:14:46.826 "data_offset": 0, 00:14:46.826 "data_size": 65536 00:14:46.826 }, 00:14:46.826 { 00:14:46.826 
"name": "BaseBdev2", 00:14:46.826 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:46.826 "is_configured": true, 00:14:46.826 "data_offset": 0, 00:14:46.826 "data_size": 65536 00:14:46.826 }, 00:14:46.826 { 00:14:46.826 "name": "BaseBdev3", 00:14:46.826 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:46.826 "is_configured": true, 00:14:46.826 "data_offset": 0, 00:14:46.826 "data_size": 65536 00:14:46.826 } 00:14:46.826 ] 00:14:46.826 }' 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.826 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.085 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.085 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.085 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.085 [2024-11-26 13:27:35.648396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.344 [2024-11-26 13:27:35.661371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:47.344 13:27:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.344 13:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:47.344 [2024-11-26 13:27:35.667409] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.281 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.281 "name": "raid_bdev1", 00:14:48.281 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:48.281 "strip_size_kb": 64, 00:14:48.282 "state": "online", 00:14:48.282 "raid_level": "raid5f", 00:14:48.282 "superblock": false, 00:14:48.282 "num_base_bdevs": 3, 00:14:48.282 "num_base_bdevs_discovered": 3, 00:14:48.282 "num_base_bdevs_operational": 3, 00:14:48.282 "process": { 00:14:48.282 "type": "rebuild", 00:14:48.282 "target": "spare", 00:14:48.282 "progress": { 00:14:48.282 "blocks": 18432, 00:14:48.282 "percent": 14 00:14:48.282 } 00:14:48.282 }, 00:14:48.282 "base_bdevs_list": [ 00:14:48.282 { 00:14:48.282 "name": "spare", 00:14:48.282 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:48.282 "is_configured": true, 00:14:48.282 "data_offset": 0, 00:14:48.282 "data_size": 65536 00:14:48.282 }, 00:14:48.282 { 00:14:48.282 "name": "BaseBdev2", 00:14:48.282 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:48.282 "is_configured": true, 00:14:48.282 "data_offset": 0, 00:14:48.282 "data_size": 65536 00:14:48.282 }, 00:14:48.282 { 00:14:48.282 "name": "BaseBdev3", 00:14:48.282 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:48.282 "is_configured": true, 00:14:48.282 "data_offset": 0, 00:14:48.282 
"data_size": 65536 00:14:48.282 } 00:14:48.282 ] 00:14:48.282 }' 00:14:48.282 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.282 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.282 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.282 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.282 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.282 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.282 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.282 [2024-11-26 13:27:36.832570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.541 [2024-11-26 13:27:36.878417] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.541 [2024-11-26 13:27:36.878497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.541 [2024-11-26 13:27:36.878524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.541 [2024-11-26 13:27:36.878535] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.541 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.542 "name": "raid_bdev1", 00:14:48.542 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:48.542 "strip_size_kb": 64, 00:14:48.542 "state": "online", 00:14:48.542 "raid_level": "raid5f", 00:14:48.542 "superblock": false, 00:14:48.542 "num_base_bdevs": 3, 00:14:48.542 "num_base_bdevs_discovered": 2, 00:14:48.542 "num_base_bdevs_operational": 2, 00:14:48.542 "base_bdevs_list": [ 00:14:48.542 { 00:14:48.542 "name": null, 00:14:48.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.542 "is_configured": false, 00:14:48.542 "data_offset": 0, 00:14:48.542 "data_size": 65536 00:14:48.542 }, 00:14:48.542 { 00:14:48.542 "name": "BaseBdev2", 00:14:48.542 
"uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:48.542 "is_configured": true, 00:14:48.542 "data_offset": 0, 00:14:48.542 "data_size": 65536 00:14:48.542 }, 00:14:48.542 { 00:14:48.542 "name": "BaseBdev3", 00:14:48.542 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:48.542 "is_configured": true, 00:14:48.542 "data_offset": 0, 00:14:48.542 "data_size": 65536 00:14:48.542 } 00:14:48.542 ] 00:14:48.542 }' 00:14:48.542 13:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.542 13:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.110 "name": "raid_bdev1", 00:14:49.110 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:49.110 "strip_size_kb": 64, 00:14:49.110 "state": "online", 00:14:49.110 "raid_level": 
"raid5f", 00:14:49.110 "superblock": false, 00:14:49.110 "num_base_bdevs": 3, 00:14:49.110 "num_base_bdevs_discovered": 2, 00:14:49.110 "num_base_bdevs_operational": 2, 00:14:49.110 "base_bdevs_list": [ 00:14:49.110 { 00:14:49.110 "name": null, 00:14:49.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.110 "is_configured": false, 00:14:49.110 "data_offset": 0, 00:14:49.110 "data_size": 65536 00:14:49.110 }, 00:14:49.110 { 00:14:49.110 "name": "BaseBdev2", 00:14:49.110 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:49.110 "is_configured": true, 00:14:49.110 "data_offset": 0, 00:14:49.110 "data_size": 65536 00:14:49.110 }, 00:14:49.110 { 00:14:49.110 "name": "BaseBdev3", 00:14:49.110 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:49.110 "is_configured": true, 00:14:49.110 "data_offset": 0, 00:14:49.110 "data_size": 65536 00:14:49.110 } 00:14:49.110 ] 00:14:49.110 }' 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.110 [2024-11-26 13:27:37.579166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.110 [2024-11-26 13:27:37.589803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.110 13:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:49.110 [2024-11-26 13:27:37.595889] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.047 13:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.305 "name": "raid_bdev1", 00:14:50.305 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:50.305 "strip_size_kb": 64, 00:14:50.305 "state": "online", 00:14:50.305 "raid_level": "raid5f", 00:14:50.305 "superblock": false, 00:14:50.305 "num_base_bdevs": 3, 00:14:50.305 "num_base_bdevs_discovered": 3, 00:14:50.305 "num_base_bdevs_operational": 3, 00:14:50.305 "process": { 00:14:50.305 "type": "rebuild", 00:14:50.305 "target": "spare", 00:14:50.305 "progress": { 00:14:50.305 "blocks": 18432, 00:14:50.305 
"percent": 14 00:14:50.305 } 00:14:50.305 }, 00:14:50.305 "base_bdevs_list": [ 00:14:50.305 { 00:14:50.305 "name": "spare", 00:14:50.305 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:50.305 "is_configured": true, 00:14:50.305 "data_offset": 0, 00:14:50.305 "data_size": 65536 00:14:50.305 }, 00:14:50.305 { 00:14:50.305 "name": "BaseBdev2", 00:14:50.305 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:50.305 "is_configured": true, 00:14:50.305 "data_offset": 0, 00:14:50.305 "data_size": 65536 00:14:50.305 }, 00:14:50.305 { 00:14:50.305 "name": "BaseBdev3", 00:14:50.305 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:50.305 "is_configured": true, 00:14:50.305 "data_offset": 0, 00:14:50.305 "data_size": 65536 00:14:50.305 } 00:14:50.305 ] 00:14:50.305 }' 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.305 13:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.306 13:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.306 13:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.306 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.306 "name": "raid_bdev1", 00:14:50.306 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:50.306 "strip_size_kb": 64, 00:14:50.306 "state": "online", 00:14:50.306 "raid_level": "raid5f", 00:14:50.306 "superblock": false, 00:14:50.306 "num_base_bdevs": 3, 00:14:50.306 "num_base_bdevs_discovered": 3, 00:14:50.306 "num_base_bdevs_operational": 3, 00:14:50.306 "process": { 00:14:50.306 "type": "rebuild", 00:14:50.306 "target": "spare", 00:14:50.306 "progress": { 00:14:50.306 "blocks": 22528, 00:14:50.306 "percent": 17 00:14:50.306 } 00:14:50.306 }, 00:14:50.306 "base_bdevs_list": [ 00:14:50.306 { 00:14:50.306 "name": "spare", 00:14:50.306 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:50.306 "is_configured": true, 00:14:50.306 "data_offset": 0, 00:14:50.306 "data_size": 65536 00:14:50.306 }, 00:14:50.306 { 00:14:50.306 "name": "BaseBdev2", 00:14:50.306 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:50.306 "is_configured": true, 00:14:50.306 "data_offset": 0, 00:14:50.306 
"data_size": 65536 00:14:50.306 }, 00:14:50.306 { 00:14:50.306 "name": "BaseBdev3", 00:14:50.306 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:50.306 "is_configured": true, 00:14:50.306 "data_offset": 0, 00:14:50.306 "data_size": 65536 00:14:50.306 } 00:14:50.306 ] 00:14:50.306 }' 00:14:50.306 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.306 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.306 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.563 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.564 13:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.498 "name": "raid_bdev1", 00:14:51.498 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:51.498 "strip_size_kb": 64, 00:14:51.498 "state": "online", 00:14:51.498 "raid_level": "raid5f", 00:14:51.498 "superblock": false, 00:14:51.498 "num_base_bdevs": 3, 00:14:51.498 "num_base_bdevs_discovered": 3, 00:14:51.498 "num_base_bdevs_operational": 3, 00:14:51.498 "process": { 00:14:51.498 "type": "rebuild", 00:14:51.498 "target": "spare", 00:14:51.498 "progress": { 00:14:51.498 "blocks": 47104, 00:14:51.498 "percent": 35 00:14:51.498 } 00:14:51.498 }, 00:14:51.498 "base_bdevs_list": [ 00:14:51.498 { 00:14:51.498 "name": "spare", 00:14:51.498 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:51.498 "is_configured": true, 00:14:51.498 "data_offset": 0, 00:14:51.498 "data_size": 65536 00:14:51.498 }, 00:14:51.498 { 00:14:51.498 "name": "BaseBdev2", 00:14:51.498 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:51.498 "is_configured": true, 00:14:51.498 "data_offset": 0, 00:14:51.498 "data_size": 65536 00:14:51.498 }, 00:14:51.498 { 00:14:51.498 "name": "BaseBdev3", 00:14:51.498 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:51.498 "is_configured": true, 00:14:51.498 "data_offset": 0, 00:14:51.498 "data_size": 65536 00:14:51.498 } 00:14:51.498 ] 00:14:51.498 }' 00:14:51.498 13:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.498 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.498 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.757 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.757 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.693 "name": "raid_bdev1", 00:14:52.693 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:52.693 "strip_size_kb": 64, 00:14:52.693 "state": "online", 00:14:52.693 "raid_level": "raid5f", 00:14:52.693 "superblock": false, 00:14:52.693 "num_base_bdevs": 3, 00:14:52.693 "num_base_bdevs_discovered": 3, 00:14:52.693 "num_base_bdevs_operational": 3, 00:14:52.693 "process": { 00:14:52.693 "type": "rebuild", 00:14:52.693 "target": "spare", 00:14:52.693 "progress": { 00:14:52.693 "blocks": 69632, 00:14:52.693 "percent": 53 00:14:52.693 } 00:14:52.693 }, 00:14:52.693 "base_bdevs_list": [ 00:14:52.693 { 00:14:52.693 "name": "spare", 00:14:52.693 "uuid": 
"743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:52.693 "is_configured": true, 00:14:52.693 "data_offset": 0, 00:14:52.693 "data_size": 65536 00:14:52.693 }, 00:14:52.693 { 00:14:52.693 "name": "BaseBdev2", 00:14:52.693 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:52.693 "is_configured": true, 00:14:52.693 "data_offset": 0, 00:14:52.693 "data_size": 65536 00:14:52.693 }, 00:14:52.693 { 00:14:52.693 "name": "BaseBdev3", 00:14:52.693 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:52.693 "is_configured": true, 00:14:52.693 "data_offset": 0, 00:14:52.693 "data_size": 65536 00:14:52.693 } 00:14:52.693 ] 00:14:52.693 }' 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.693 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.694 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.071 13:27:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.071 "name": "raid_bdev1", 00:14:54.071 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:54.071 "strip_size_kb": 64, 00:14:54.071 "state": "online", 00:14:54.071 "raid_level": "raid5f", 00:14:54.071 "superblock": false, 00:14:54.071 "num_base_bdevs": 3, 00:14:54.071 "num_base_bdevs_discovered": 3, 00:14:54.071 "num_base_bdevs_operational": 3, 00:14:54.071 "process": { 00:14:54.071 "type": "rebuild", 00:14:54.071 "target": "spare", 00:14:54.071 "progress": { 00:14:54.071 "blocks": 94208, 00:14:54.071 "percent": 71 00:14:54.071 } 00:14:54.071 }, 00:14:54.071 "base_bdevs_list": [ 00:14:54.071 { 00:14:54.071 "name": "spare", 00:14:54.071 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:54.071 "is_configured": true, 00:14:54.071 "data_offset": 0, 00:14:54.071 "data_size": 65536 00:14:54.071 }, 00:14:54.071 { 00:14:54.071 "name": "BaseBdev2", 00:14:54.071 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:54.071 "is_configured": true, 00:14:54.071 "data_offset": 0, 00:14:54.071 "data_size": 65536 00:14:54.071 }, 00:14:54.071 { 00:14:54.071 "name": "BaseBdev3", 00:14:54.071 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:54.071 "is_configured": true, 00:14:54.071 "data_offset": 0, 00:14:54.071 "data_size": 65536 00:14:54.071 } 00:14:54.071 ] 00:14:54.071 }' 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.071 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.009 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.009 "name": "raid_bdev1", 00:14:55.009 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:55.010 "strip_size_kb": 64, 00:14:55.010 "state": "online", 00:14:55.010 "raid_level": "raid5f", 00:14:55.010 "superblock": false, 00:14:55.010 "num_base_bdevs": 3, 00:14:55.010 "num_base_bdevs_discovered": 3, 00:14:55.010 
"num_base_bdevs_operational": 3, 00:14:55.010 "process": { 00:14:55.010 "type": "rebuild", 00:14:55.010 "target": "spare", 00:14:55.010 "progress": { 00:14:55.010 "blocks": 116736, 00:14:55.010 "percent": 89 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 "base_bdevs_list": [ 00:14:55.010 { 00:14:55.010 "name": "spare", 00:14:55.010 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:55.010 "is_configured": true, 00:14:55.010 "data_offset": 0, 00:14:55.010 "data_size": 65536 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "name": "BaseBdev2", 00:14:55.010 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:55.010 "is_configured": true, 00:14:55.010 "data_offset": 0, 00:14:55.010 "data_size": 65536 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "name": "BaseBdev3", 00:14:55.010 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:55.010 "is_configured": true, 00:14:55.010 "data_offset": 0, 00:14:55.010 "data_size": 65536 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 }' 00:14:55.010 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.010 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.010 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.269 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.269 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.528 [2024-11-26 13:27:44.050876] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:55.528 [2024-11-26 13:27:44.051127] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:55.528 [2024-11-26 13:27:44.051198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.097 "name": "raid_bdev1", 00:14:56.097 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:56.097 "strip_size_kb": 64, 00:14:56.097 "state": "online", 00:14:56.097 "raid_level": "raid5f", 00:14:56.097 "superblock": false, 00:14:56.097 "num_base_bdevs": 3, 00:14:56.097 "num_base_bdevs_discovered": 3, 00:14:56.097 "num_base_bdevs_operational": 3, 00:14:56.097 "base_bdevs_list": [ 00:14:56.097 { 00:14:56.097 "name": "spare", 00:14:56.097 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:56.097 "is_configured": true, 00:14:56.097 "data_offset": 0, 00:14:56.097 "data_size": 65536 00:14:56.097 }, 00:14:56.097 { 00:14:56.097 "name": "BaseBdev2", 00:14:56.097 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:56.097 "is_configured": true, 00:14:56.097 
"data_offset": 0, 00:14:56.097 "data_size": 65536 00:14:56.097 }, 00:14:56.097 { 00:14:56.097 "name": "BaseBdev3", 00:14:56.097 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:56.097 "is_configured": true, 00:14:56.097 "data_offset": 0, 00:14:56.097 "data_size": 65536 00:14:56.097 } 00:14:56.097 ] 00:14:56.097 }' 00:14:56.097 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.356 13:27:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.356 "name": "raid_bdev1", 00:14:56.356 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:56.356 "strip_size_kb": 64, 00:14:56.356 "state": "online", 00:14:56.356 "raid_level": "raid5f", 00:14:56.356 "superblock": false, 00:14:56.356 "num_base_bdevs": 3, 00:14:56.356 "num_base_bdevs_discovered": 3, 00:14:56.356 "num_base_bdevs_operational": 3, 00:14:56.356 "base_bdevs_list": [ 00:14:56.356 { 00:14:56.356 "name": "spare", 00:14:56.356 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:56.356 "is_configured": true, 00:14:56.356 "data_offset": 0, 00:14:56.356 "data_size": 65536 00:14:56.356 }, 00:14:56.356 { 00:14:56.356 "name": "BaseBdev2", 00:14:56.356 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:56.356 "is_configured": true, 00:14:56.356 "data_offset": 0, 00:14:56.356 "data_size": 65536 00:14:56.356 }, 00:14:56.356 { 00:14:56.356 "name": "BaseBdev3", 00:14:56.356 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:56.356 "is_configured": true, 00:14:56.356 "data_offset": 0, 00:14:56.356 "data_size": 65536 00:14:56.356 } 00:14:56.356 ] 00:14:56.356 }' 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.356 13:27:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.356 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.357 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.616 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.616 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.616 "name": "raid_bdev1", 00:14:56.616 "uuid": "3828efd9-52da-4494-bf73-fdfa75ffb869", 00:14:56.616 "strip_size_kb": 64, 00:14:56.616 "state": "online", 00:14:56.616 "raid_level": "raid5f", 00:14:56.616 "superblock": false, 00:14:56.616 "num_base_bdevs": 3, 00:14:56.616 "num_base_bdevs_discovered": 3, 00:14:56.616 "num_base_bdevs_operational": 3, 00:14:56.616 "base_bdevs_list": [ 00:14:56.616 { 00:14:56.616 "name": "spare", 00:14:56.616 "uuid": "743bec0e-efb7-5834-a347-0f7311b1ec76", 00:14:56.616 "is_configured": true, 00:14:56.616 "data_offset": 0, 00:14:56.616 "data_size": 65536 00:14:56.616 }, 00:14:56.616 { 00:14:56.616 
"name": "BaseBdev2", 00:14:56.616 "uuid": "7e6c8cfd-2dde-5157-abf1-eff37194f676", 00:14:56.616 "is_configured": true, 00:14:56.616 "data_offset": 0, 00:14:56.616 "data_size": 65536 00:14:56.616 }, 00:14:56.616 { 00:14:56.616 "name": "BaseBdev3", 00:14:56.616 "uuid": "60a0b3e0-57a1-5706-ac50-e31a8a99d2e6", 00:14:56.616 "is_configured": true, 00:14:56.616 "data_offset": 0, 00:14:56.616 "data_size": 65536 00:14:56.616 } 00:14:56.616 ] 00:14:56.616 }' 00:14:56.616 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.616 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.875 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.875 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.875 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.875 [2024-11-26 13:27:45.432057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.875 [2024-11-26 13:27:45.432084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.875 [2024-11-26 13:27:45.432184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.875 [2024-11-26 13:27:45.432310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.875 [2024-11-26 13:27:45.432334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:56.875 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.875 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.875 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.875 13:27:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.134 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:57.394 /dev/nbd0 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.394 13:27:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.394 1+0 records in 00:14:57.394 1+0 records out 00:14:57.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318648 s, 12.9 MB/s 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.394 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:57.653 /dev/nbd1 00:14:57.653 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.654 1+0 records in 00:14:57.654 1+0 records out 00:14:57.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359057 s, 11.4 MB/s 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.654 13:27:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.654 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:57.913 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:57.913 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.913 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:57.913 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.913 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:57.913 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.913 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.172 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81207 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81207 ']' 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81207 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81207 00:14:58.431 killing process with pid 81207 00:14:58.431 Received shutdown signal, test time was about 60.000000 seconds 00:14:58.431 00:14:58.431 Latency(us) 00:14:58.431 
[2024-11-26T13:27:47.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.431 [2024-11-26T13:27:47.001Z] =================================================================================================================== 00:14:58.431 [2024-11-26T13:27:47.001Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81207' 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81207 00:14:58.431 [2024-11-26 13:27:46.899060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.431 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81207 00:14:58.690 [2024-11-26 13:27:47.167930] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.628 ************************************ 00:14:59.628 END TEST raid5f_rebuild_test 00:14:59.628 ************************************ 00:14:59.628 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:59.628 00:14:59.628 real 0m15.786s 00:14:59.628 user 0m20.252s 00:14:59.628 sys 0m1.963s 00:14:59.628 13:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.628 13:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.628 13:27:48 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:59.628 13:27:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:59.628 13:27:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.628 13:27:48 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.628 ************************************ 00:14:59.628 START TEST raid5f_rebuild_test_sb 00:14:59.628 ************************************ 00:14:59.628 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:59.628 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:59.628 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:59.628 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81647 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81647 00:14:59.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81647 ']' 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.629 13:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.629 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.629 Zero copy mechanism will not be used. 00:14:59.629 [2024-11-26 13:27:48.173851] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:14:59.629 [2024-11-26 13:27:48.174049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81647 ] 00:14:59.887 [2024-11-26 13:27:48.357767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.146 [2024-11-26 13:27:48.455965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.146 [2024-11-26 13:27:48.623185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.146 [2024-11-26 13:27:48.623257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.714 BaseBdev1_malloc 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.714 [2024-11-26 13:27:49.105761] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:00.714 [2024-11-26 13:27:49.105833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.714 [2024-11-26 13:27:49.105864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:00.714 [2024-11-26 13:27:49.105880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.714 [2024-11-26 13:27:49.108417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.714 [2024-11-26 13:27:49.108638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:00.714 BaseBdev1 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:00.714 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 BaseBdev2_malloc 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 [2024-11-26 13:27:49.147776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:00.715 [2024-11-26 13:27:49.147836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:00.715 [2024-11-26 13:27:49.147860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:00.715 [2024-11-26 13:27:49.147878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.715 [2024-11-26 13:27:49.150223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.715 [2024-11-26 13:27:49.150279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:00.715 BaseBdev2 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 BaseBdev3_malloc 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 [2024-11-26 13:27:49.197811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:00.715 [2024-11-26 13:27:49.197885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.715 [2024-11-26 13:27:49.197913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:00.715 [2024-11-26 
13:27:49.197930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.715 [2024-11-26 13:27:49.200448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.715 [2024-11-26 13:27:49.200495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:00.715 BaseBdev3 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 spare_malloc 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 spare_delay 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 [2024-11-26 13:27:49.251448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:00.715 [2024-11-26 13:27:49.251650] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.715 [2024-11-26 13:27:49.251684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:00.715 [2024-11-26 13:27:49.251702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.715 [2024-11-26 13:27:49.254194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.715 [2024-11-26 13:27:49.254266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:00.715 spare 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.715 [2024-11-26 13:27:49.259530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.715 [2024-11-26 13:27:49.261522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.715 [2024-11-26 13:27:49.261593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.715 [2024-11-26 13:27:49.261794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:00.715 [2024-11-26 13:27:49.261812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:00.715 [2024-11-26 13:27:49.262075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:00.715 [2024-11-26 13:27:49.266333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:00.715 [2024-11-26 13:27:49.266361] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:00.715 [2024-11-26 13:27:49.266555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.715 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.974 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.974 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.974 "name": "raid_bdev1", 00:15:00.974 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:00.974 "strip_size_kb": 64, 00:15:00.974 "state": "online", 00:15:00.974 "raid_level": "raid5f", 00:15:00.974 "superblock": true, 00:15:00.974 "num_base_bdevs": 3, 00:15:00.974 "num_base_bdevs_discovered": 3, 00:15:00.974 "num_base_bdevs_operational": 3, 00:15:00.974 "base_bdevs_list": [ 00:15:00.974 { 00:15:00.974 "name": "BaseBdev1", 00:15:00.974 "uuid": "a9d29cba-91eb-5a2b-bab4-3ea3bf94138c", 00:15:00.974 "is_configured": true, 00:15:00.974 "data_offset": 2048, 00:15:00.974 "data_size": 63488 00:15:00.974 }, 00:15:00.974 { 00:15:00.974 "name": "BaseBdev2", 00:15:00.974 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:00.974 "is_configured": true, 00:15:00.974 "data_offset": 2048, 00:15:00.974 "data_size": 63488 00:15:00.974 }, 00:15:00.974 { 00:15:00.974 "name": "BaseBdev3", 00:15:00.974 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:00.974 "is_configured": true, 00:15:00.974 "data_offset": 2048, 00:15:00.974 "data_size": 63488 00:15:00.974 } 00:15:00.974 ] 00:15:00.974 }' 00:15:00.974 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.974 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.233 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:01.233 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.233 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.233 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.233 [2024-11-26 13:27:49.779658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:01.233 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.492 13:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:01.751 [2024-11-26 13:27:50.135575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:01.751 /dev/nbd0 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.751 1+0 records in 00:15:01.751 1+0 records out 00:15:01.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597719 s, 6.9 MB/s 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:01.751 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:02.319 496+0 records in 00:15:02.319 496+0 records out 00:15:02.319 65011712 bytes (65 MB, 62 MiB) copied, 0.436577 s, 149 MB/s 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.319 [2024-11-26 13:27:50.836956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.319 [2024-11-26 13:27:50.866438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.319 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.320 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.579 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.579 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.579 "name": "raid_bdev1", 00:15:02.579 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:02.579 "strip_size_kb": 64, 00:15:02.579 "state": "online", 00:15:02.579 "raid_level": "raid5f", 00:15:02.579 "superblock": true, 00:15:02.579 "num_base_bdevs": 3, 00:15:02.579 "num_base_bdevs_discovered": 2, 00:15:02.579 "num_base_bdevs_operational": 2, 00:15:02.579 "base_bdevs_list": [ 00:15:02.579 { 00:15:02.579 "name": null, 00:15:02.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.579 "is_configured": 
false, 00:15:02.579 "data_offset": 0, 00:15:02.579 "data_size": 63488 00:15:02.579 }, 00:15:02.579 { 00:15:02.579 "name": "BaseBdev2", 00:15:02.579 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:02.579 "is_configured": true, 00:15:02.579 "data_offset": 2048, 00:15:02.579 "data_size": 63488 00:15:02.579 }, 00:15:02.579 { 00:15:02.579 "name": "BaseBdev3", 00:15:02.579 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:02.579 "is_configured": true, 00:15:02.579 "data_offset": 2048, 00:15:02.579 "data_size": 63488 00:15:02.579 } 00:15:02.579 ] 00:15:02.579 }' 00:15:02.579 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.579 13:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.839 13:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.839 13:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.839 13:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.839 [2024-11-26 13:27:51.382536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.839 [2024-11-26 13:27:51.395790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:02.839 13:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.839 13:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:02.839 [2024-11-26 13:27:51.402412] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.216 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.216 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.216 13:27:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.216 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.217 "name": "raid_bdev1", 00:15:04.217 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:04.217 "strip_size_kb": 64, 00:15:04.217 "state": "online", 00:15:04.217 "raid_level": "raid5f", 00:15:04.217 "superblock": true, 00:15:04.217 "num_base_bdevs": 3, 00:15:04.217 "num_base_bdevs_discovered": 3, 00:15:04.217 "num_base_bdevs_operational": 3, 00:15:04.217 "process": { 00:15:04.217 "type": "rebuild", 00:15:04.217 "target": "spare", 00:15:04.217 "progress": { 00:15:04.217 "blocks": 18432, 00:15:04.217 "percent": 14 00:15:04.217 } 00:15:04.217 }, 00:15:04.217 "base_bdevs_list": [ 00:15:04.217 { 00:15:04.217 "name": "spare", 00:15:04.217 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:04.217 "is_configured": true, 00:15:04.217 "data_offset": 2048, 00:15:04.217 "data_size": 63488 00:15:04.217 }, 00:15:04.217 { 00:15:04.217 "name": "BaseBdev2", 00:15:04.217 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:04.217 "is_configured": true, 00:15:04.217 "data_offset": 2048, 00:15:04.217 "data_size": 63488 
00:15:04.217 }, 00:15:04.217 { 00:15:04.217 "name": "BaseBdev3", 00:15:04.217 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:04.217 "is_configured": true, 00:15:04.217 "data_offset": 2048, 00:15:04.217 "data_size": 63488 00:15:04.217 } 00:15:04.217 ] 00:15:04.217 }' 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.217 [2024-11-26 13:27:52.571427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.217 [2024-11-26 13:27:52.613533] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:04.217 [2024-11-26 13:27:52.613593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.217 [2024-11-26 13:27:52.613616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.217 [2024-11-26 13:27:52.613625] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.217 "name": "raid_bdev1", 00:15:04.217 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:04.217 "strip_size_kb": 64, 00:15:04.217 "state": "online", 00:15:04.217 "raid_level": "raid5f", 00:15:04.217 "superblock": true, 00:15:04.217 "num_base_bdevs": 3, 00:15:04.217 "num_base_bdevs_discovered": 2, 00:15:04.217 "num_base_bdevs_operational": 2, 00:15:04.217 "base_bdevs_list": [ 00:15:04.217 
{ 00:15:04.217 "name": null, 00:15:04.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.217 "is_configured": false, 00:15:04.217 "data_offset": 0, 00:15:04.217 "data_size": 63488 00:15:04.217 }, 00:15:04.217 { 00:15:04.217 "name": "BaseBdev2", 00:15:04.217 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:04.217 "is_configured": true, 00:15:04.217 "data_offset": 2048, 00:15:04.217 "data_size": 63488 00:15:04.217 }, 00:15:04.217 { 00:15:04.217 "name": "BaseBdev3", 00:15:04.217 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:04.217 "is_configured": true, 00:15:04.217 "data_offset": 2048, 00:15:04.217 "data_size": 63488 00:15:04.217 } 00:15:04.217 ] 00:15:04.217 }' 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.217 13:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.785 "name": "raid_bdev1", 00:15:04.785 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:04.785 "strip_size_kb": 64, 00:15:04.785 "state": "online", 00:15:04.785 "raid_level": "raid5f", 00:15:04.785 "superblock": true, 00:15:04.785 "num_base_bdevs": 3, 00:15:04.785 "num_base_bdevs_discovered": 2, 00:15:04.785 "num_base_bdevs_operational": 2, 00:15:04.785 "base_bdevs_list": [ 00:15:04.785 { 00:15:04.785 "name": null, 00:15:04.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.785 "is_configured": false, 00:15:04.785 "data_offset": 0, 00:15:04.785 "data_size": 63488 00:15:04.785 }, 00:15:04.785 { 00:15:04.785 "name": "BaseBdev2", 00:15:04.785 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:04.785 "is_configured": true, 00:15:04.785 "data_offset": 2048, 00:15:04.785 "data_size": 63488 00:15:04.785 }, 00:15:04.785 { 00:15:04.785 "name": "BaseBdev3", 00:15:04.785 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:04.785 "is_configured": true, 00:15:04.785 "data_offset": 2048, 00:15:04.785 "data_size": 63488 00:15:04.785 } 00:15:04.785 ] 00:15:04.785 }' 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:04.785 [2024-11-26 13:27:53.297900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.785 [2024-11-26 13:27:53.308429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.785 13:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:04.785 [2024-11-26 13:27:53.314676] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.164 "name": "raid_bdev1", 00:15:06.164 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:06.164 "strip_size_kb": 64, 00:15:06.164 "state": "online", 
00:15:06.164 "raid_level": "raid5f", 00:15:06.164 "superblock": true, 00:15:06.164 "num_base_bdevs": 3, 00:15:06.164 "num_base_bdevs_discovered": 3, 00:15:06.164 "num_base_bdevs_operational": 3, 00:15:06.164 "process": { 00:15:06.164 "type": "rebuild", 00:15:06.164 "target": "spare", 00:15:06.164 "progress": { 00:15:06.164 "blocks": 18432, 00:15:06.164 "percent": 14 00:15:06.164 } 00:15:06.164 }, 00:15:06.164 "base_bdevs_list": [ 00:15:06.164 { 00:15:06.164 "name": "spare", 00:15:06.164 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:06.164 "is_configured": true, 00:15:06.164 "data_offset": 2048, 00:15:06.164 "data_size": 63488 00:15:06.164 }, 00:15:06.164 { 00:15:06.164 "name": "BaseBdev2", 00:15:06.164 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:06.164 "is_configured": true, 00:15:06.164 "data_offset": 2048, 00:15:06.164 "data_size": 63488 00:15:06.164 }, 00:15:06.164 { 00:15:06.164 "name": "BaseBdev3", 00:15:06.164 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:06.164 "is_configured": true, 00:15:06.164 "data_offset": 2048, 00:15:06.164 "data_size": 63488 00:15:06.164 } 00:15:06.164 ] 00:15:06.164 }' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:06.164 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=576 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.164 "name": "raid_bdev1", 00:15:06.164 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:06.164 "strip_size_kb": 64, 00:15:06.164 "state": "online", 00:15:06.164 "raid_level": "raid5f", 00:15:06.164 "superblock": true, 00:15:06.164 "num_base_bdevs": 3, 00:15:06.164 "num_base_bdevs_discovered": 3, 00:15:06.164 "num_base_bdevs_operational": 3, 00:15:06.164 "process": { 00:15:06.164 "type": 
"rebuild", 00:15:06.164 "target": "spare", 00:15:06.164 "progress": { 00:15:06.164 "blocks": 22528, 00:15:06.164 "percent": 17 00:15:06.164 } 00:15:06.164 }, 00:15:06.164 "base_bdevs_list": [ 00:15:06.164 { 00:15:06.164 "name": "spare", 00:15:06.164 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:06.164 "is_configured": true, 00:15:06.164 "data_offset": 2048, 00:15:06.164 "data_size": 63488 00:15:06.164 }, 00:15:06.164 { 00:15:06.164 "name": "BaseBdev2", 00:15:06.164 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:06.164 "is_configured": true, 00:15:06.164 "data_offset": 2048, 00:15:06.164 "data_size": 63488 00:15:06.164 }, 00:15:06.164 { 00:15:06.164 "name": "BaseBdev3", 00:15:06.164 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:06.164 "is_configured": true, 00:15:06.164 "data_offset": 2048, 00:15:06.164 "data_size": 63488 00:15:06.164 } 00:15:06.164 ] 00:15:06.164 }' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.164 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.165 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.165 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.101 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.360 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.360 "name": "raid_bdev1", 00:15:07.360 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:07.360 "strip_size_kb": 64, 00:15:07.360 "state": "online", 00:15:07.360 "raid_level": "raid5f", 00:15:07.360 "superblock": true, 00:15:07.360 "num_base_bdevs": 3, 00:15:07.360 "num_base_bdevs_discovered": 3, 00:15:07.360 "num_base_bdevs_operational": 3, 00:15:07.360 "process": { 00:15:07.360 "type": "rebuild", 00:15:07.360 "target": "spare", 00:15:07.360 "progress": { 00:15:07.360 "blocks": 47104, 00:15:07.360 "percent": 37 00:15:07.360 } 00:15:07.360 }, 00:15:07.360 "base_bdevs_list": [ 00:15:07.360 { 00:15:07.360 "name": "spare", 00:15:07.360 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:07.360 "is_configured": true, 00:15:07.360 "data_offset": 2048, 00:15:07.360 "data_size": 63488 00:15:07.360 }, 00:15:07.360 { 00:15:07.360 "name": "BaseBdev2", 00:15:07.360 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:07.360 "is_configured": true, 00:15:07.360 "data_offset": 2048, 00:15:07.360 "data_size": 63488 00:15:07.360 }, 00:15:07.360 { 00:15:07.360 "name": "BaseBdev3", 00:15:07.360 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:07.360 
"is_configured": true, 00:15:07.360 "data_offset": 2048, 00:15:07.360 "data_size": 63488 00:15:07.360 } 00:15:07.360 ] 00:15:07.360 }' 00:15:07.360 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.360 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.360 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.360 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.360 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.300 "name": "raid_bdev1", 00:15:08.300 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:08.300 "strip_size_kb": 64, 00:15:08.300 "state": "online", 00:15:08.300 "raid_level": "raid5f", 00:15:08.300 "superblock": true, 00:15:08.300 "num_base_bdevs": 3, 00:15:08.300 "num_base_bdevs_discovered": 3, 00:15:08.300 "num_base_bdevs_operational": 3, 00:15:08.300 "process": { 00:15:08.300 "type": "rebuild", 00:15:08.300 "target": "spare", 00:15:08.300 "progress": { 00:15:08.300 "blocks": 69632, 00:15:08.300 "percent": 54 00:15:08.300 } 00:15:08.300 }, 00:15:08.300 "base_bdevs_list": [ 00:15:08.300 { 00:15:08.300 "name": "spare", 00:15:08.300 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:08.300 "is_configured": true, 00:15:08.300 "data_offset": 2048, 00:15:08.300 "data_size": 63488 00:15:08.300 }, 00:15:08.300 { 00:15:08.300 "name": "BaseBdev2", 00:15:08.300 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:08.300 "is_configured": true, 00:15:08.300 "data_offset": 2048, 00:15:08.300 "data_size": 63488 00:15:08.300 }, 00:15:08.300 { 00:15:08.300 "name": "BaseBdev3", 00:15:08.300 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:08.300 "is_configured": true, 00:15:08.300 "data_offset": 2048, 00:15:08.300 "data_size": 63488 00:15:08.300 } 00:15:08.300 ] 00:15:08.300 }' 00:15:08.300 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.576 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.577 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.577 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.577 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.525 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.525 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.525 "name": "raid_bdev1", 00:15:09.525 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:09.525 "strip_size_kb": 64, 00:15:09.525 "state": "online", 00:15:09.525 "raid_level": "raid5f", 00:15:09.525 "superblock": true, 00:15:09.525 "num_base_bdevs": 3, 00:15:09.525 "num_base_bdevs_discovered": 3, 00:15:09.525 "num_base_bdevs_operational": 3, 00:15:09.525 "process": { 00:15:09.525 "type": "rebuild", 00:15:09.525 "target": "spare", 00:15:09.525 "progress": { 00:15:09.525 "blocks": 94208, 00:15:09.525 "percent": 74 00:15:09.525 } 00:15:09.525 }, 00:15:09.525 "base_bdevs_list": [ 00:15:09.525 { 00:15:09.525 "name": "spare", 00:15:09.525 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:09.525 "is_configured": true, 
00:15:09.525 "data_offset": 2048, 00:15:09.525 "data_size": 63488 00:15:09.525 }, 00:15:09.525 { 00:15:09.525 "name": "BaseBdev2", 00:15:09.525 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:09.525 "is_configured": true, 00:15:09.525 "data_offset": 2048, 00:15:09.525 "data_size": 63488 00:15:09.525 }, 00:15:09.525 { 00:15:09.525 "name": "BaseBdev3", 00:15:09.525 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:09.525 "is_configured": true, 00:15:09.525 "data_offset": 2048, 00:15:09.525 "data_size": 63488 00:15:09.525 } 00:15:09.526 ] 00:15:09.526 }' 00:15:09.526 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.526 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.526 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.783 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.783 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.715 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.716 "name": "raid_bdev1", 00:15:10.716 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:10.716 "strip_size_kb": 64, 00:15:10.716 "state": "online", 00:15:10.716 "raid_level": "raid5f", 00:15:10.716 "superblock": true, 00:15:10.716 "num_base_bdevs": 3, 00:15:10.716 "num_base_bdevs_discovered": 3, 00:15:10.716 "num_base_bdevs_operational": 3, 00:15:10.716 "process": { 00:15:10.716 "type": "rebuild", 00:15:10.716 "target": "spare", 00:15:10.716 "progress": { 00:15:10.716 "blocks": 116736, 00:15:10.716 "percent": 91 00:15:10.716 } 00:15:10.716 }, 00:15:10.716 "base_bdevs_list": [ 00:15:10.716 { 00:15:10.716 "name": "spare", 00:15:10.716 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:10.716 "is_configured": true, 00:15:10.716 "data_offset": 2048, 00:15:10.716 "data_size": 63488 00:15:10.716 }, 00:15:10.716 { 00:15:10.716 "name": "BaseBdev2", 00:15:10.716 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:10.716 "is_configured": true, 00:15:10.716 "data_offset": 2048, 00:15:10.716 "data_size": 63488 00:15:10.716 }, 00:15:10.716 { 00:15:10.716 "name": "BaseBdev3", 00:15:10.716 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:10.716 "is_configured": true, 00:15:10.716 "data_offset": 2048, 00:15:10.716 "data_size": 63488 00:15:10.716 } 00:15:10.716 ] 00:15:10.716 }' 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.716 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.280 [2024-11-26 13:27:59.567064] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:11.280 [2024-11-26 13:27:59.567142] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:11.280 [2024-11-26 13:27:59.567277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.845 13:28:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.845 "name": "raid_bdev1", 00:15:11.845 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:11.845 "strip_size_kb": 64, 00:15:11.845 "state": "online", 00:15:11.845 "raid_level": "raid5f", 00:15:11.845 "superblock": true, 00:15:11.845 "num_base_bdevs": 3, 00:15:11.845 "num_base_bdevs_discovered": 3, 00:15:11.845 "num_base_bdevs_operational": 3, 00:15:11.845 "base_bdevs_list": [ 00:15:11.845 { 00:15:11.845 "name": "spare", 00:15:11.845 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:11.845 "is_configured": true, 00:15:11.845 "data_offset": 2048, 00:15:11.845 "data_size": 63488 00:15:11.845 }, 00:15:11.845 { 00:15:11.845 "name": "BaseBdev2", 00:15:11.845 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:11.845 "is_configured": true, 00:15:11.845 "data_offset": 2048, 00:15:11.845 "data_size": 63488 00:15:11.845 }, 00:15:11.845 { 00:15:11.845 "name": "BaseBdev3", 00:15:11.845 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:11.845 "is_configured": true, 00:15:11.845 "data_offset": 2048, 00:15:11.845 "data_size": 63488 00:15:11.845 } 00:15:11.845 ] 00:15:11.845 }' 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.845 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.103 
13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.103 "name": "raid_bdev1", 00:15:12.103 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:12.103 "strip_size_kb": 64, 00:15:12.103 "state": "online", 00:15:12.103 "raid_level": "raid5f", 00:15:12.103 "superblock": true, 00:15:12.103 "num_base_bdevs": 3, 00:15:12.103 "num_base_bdevs_discovered": 3, 00:15:12.103 "num_base_bdevs_operational": 3, 00:15:12.103 "base_bdevs_list": [ 00:15:12.103 { 00:15:12.103 "name": "spare", 00:15:12.103 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:12.103 "is_configured": true, 00:15:12.103 "data_offset": 2048, 00:15:12.103 "data_size": 63488 00:15:12.103 }, 00:15:12.103 { 00:15:12.103 "name": "BaseBdev2", 00:15:12.103 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:12.103 "is_configured": true, 00:15:12.103 "data_offset": 2048, 00:15:12.103 "data_size": 63488 00:15:12.103 }, 00:15:12.103 { 00:15:12.103 "name": "BaseBdev3", 00:15:12.103 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:12.103 "is_configured": true, 00:15:12.103 "data_offset": 2048, 
00:15:12.103 "data_size": 63488 00:15:12.103 } 00:15:12.103 ] 00:15:12.103 }' 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.103 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.104 "name": "raid_bdev1", 00:15:12.104 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:12.104 "strip_size_kb": 64, 00:15:12.104 "state": "online", 00:15:12.104 "raid_level": "raid5f", 00:15:12.104 "superblock": true, 00:15:12.104 "num_base_bdevs": 3, 00:15:12.104 "num_base_bdevs_discovered": 3, 00:15:12.104 "num_base_bdevs_operational": 3, 00:15:12.104 "base_bdevs_list": [ 00:15:12.104 { 00:15:12.104 "name": "spare", 00:15:12.104 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:12.104 "is_configured": true, 00:15:12.104 "data_offset": 2048, 00:15:12.104 "data_size": 63488 00:15:12.104 }, 00:15:12.104 { 00:15:12.104 "name": "BaseBdev2", 00:15:12.104 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:12.104 "is_configured": true, 00:15:12.104 "data_offset": 2048, 00:15:12.104 "data_size": 63488 00:15:12.104 }, 00:15:12.104 { 00:15:12.104 "name": "BaseBdev3", 00:15:12.104 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:12.104 "is_configured": true, 00:15:12.104 "data_offset": 2048, 00:15:12.104 "data_size": 63488 00:15:12.104 } 00:15:12.104 ] 00:15:12.104 }' 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.104 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.672 [2024-11-26 13:28:01.095871] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.672 [2024-11-26 13:28:01.095899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.672 [2024-11-26 13:28:01.095973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.672 [2024-11-26 13:28:01.096046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.672 [2024-11-26 13:28:01.096066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:12.672 13:28:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.672 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:12.931 /dev/nbd0 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.931 1+0 records in 00:15:12.931 1+0 records out 00:15:12.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307293 s, 13.3 MB/s 00:15:12.931 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.190 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:13.190 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.190 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.190 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:13.190 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.190 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.190 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:13.449 /dev/nbd1 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:13.449 
13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.449 1+0 records in 00:15:13.449 1+0 records out 00:15:13.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335974 s, 12.2 MB/s 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.449 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.016 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.275 [2024-11-26 13:28:02.583179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.275 [2024-11-26 13:28:02.583284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.275 [2024-11-26 13:28:02.583312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:14.275 [2024-11-26 13:28:02.583329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.275 [2024-11-26 13:28:02.585877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.275 [2024-11-26 13:28:02.585919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.275 [2024-11-26 13:28:02.586000] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:14.275 [2024-11-26 13:28:02.586065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.275 [2024-11-26 13:28:02.586228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.275 [2024-11-26 13:28:02.586403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.275 spare 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.275 [2024-11-26 13:28:02.686527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:14.275 [2024-11-26 13:28:02.686558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:14.275 [2024-11-26 13:28:02.686833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:14.275 [2024-11-26 13:28:02.690734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:14.275 [2024-11-26 13:28:02.690759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:14.275 [2024-11-26 13:28:02.690991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.275 "name": "raid_bdev1", 00:15:14.275 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:14.275 "strip_size_kb": 64, 00:15:14.275 "state": "online", 00:15:14.275 "raid_level": "raid5f", 00:15:14.275 "superblock": true, 00:15:14.275 "num_base_bdevs": 3, 00:15:14.275 "num_base_bdevs_discovered": 3, 00:15:14.275 "num_base_bdevs_operational": 3, 00:15:14.275 "base_bdevs_list": [ 00:15:14.275 { 
00:15:14.275 "name": "spare", 00:15:14.275 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:14.275 "is_configured": true, 00:15:14.275 "data_offset": 2048, 00:15:14.275 "data_size": 63488 00:15:14.275 }, 00:15:14.275 { 00:15:14.275 "name": "BaseBdev2", 00:15:14.275 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:14.275 "is_configured": true, 00:15:14.275 "data_offset": 2048, 00:15:14.275 "data_size": 63488 00:15:14.275 }, 00:15:14.275 { 00:15:14.275 "name": "BaseBdev3", 00:15:14.275 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:14.275 "is_configured": true, 00:15:14.275 "data_offset": 2048, 00:15:14.275 "data_size": 63488 00:15:14.275 } 00:15:14.275 ] 00:15:14.275 }' 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.275 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.843 "name": "raid_bdev1", 00:15:14.843 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:14.843 "strip_size_kb": 64, 00:15:14.843 "state": "online", 00:15:14.843 "raid_level": "raid5f", 00:15:14.843 "superblock": true, 00:15:14.843 "num_base_bdevs": 3, 00:15:14.843 "num_base_bdevs_discovered": 3, 00:15:14.843 "num_base_bdevs_operational": 3, 00:15:14.843 "base_bdevs_list": [ 00:15:14.843 { 00:15:14.843 "name": "spare", 00:15:14.843 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:14.843 "is_configured": true, 00:15:14.843 "data_offset": 2048, 00:15:14.843 "data_size": 63488 00:15:14.843 }, 00:15:14.843 { 00:15:14.843 "name": "BaseBdev2", 00:15:14.843 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:14.843 "is_configured": true, 00:15:14.843 "data_offset": 2048, 00:15:14.843 "data_size": 63488 00:15:14.843 }, 00:15:14.843 { 00:15:14.843 "name": "BaseBdev3", 00:15:14.843 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:14.843 "is_configured": true, 00:15:14.843 "data_offset": 2048, 00:15:14.843 "data_size": 63488 00:15:14.843 } 00:15:14.843 ] 00:15:14.843 }' 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.843 [2024-11-26 13:28:03.399875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.843 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.103 "name": "raid_bdev1", 00:15:15.103 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:15.103 "strip_size_kb": 64, 00:15:15.103 "state": "online", 00:15:15.103 "raid_level": "raid5f", 00:15:15.103 "superblock": true, 00:15:15.103 "num_base_bdevs": 3, 00:15:15.103 "num_base_bdevs_discovered": 2, 00:15:15.103 "num_base_bdevs_operational": 2, 00:15:15.103 "base_bdevs_list": [ 00:15:15.103 { 00:15:15.103 "name": null, 00:15:15.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.103 "is_configured": false, 00:15:15.103 "data_offset": 0, 00:15:15.103 "data_size": 63488 00:15:15.103 }, 00:15:15.103 { 00:15:15.103 "name": "BaseBdev2", 00:15:15.103 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:15.103 "is_configured": true, 00:15:15.103 "data_offset": 2048, 00:15:15.103 "data_size": 63488 00:15:15.103 }, 00:15:15.103 { 00:15:15.103 "name": "BaseBdev3", 00:15:15.103 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:15.103 "is_configured": true, 00:15:15.103 "data_offset": 2048, 00:15:15.103 "data_size": 63488 00:15:15.103 } 00:15:15.103 ] 00:15:15.103 }' 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.103 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:15.362 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.362 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.362 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.362 [2024-11-26 13:28:03.891986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.362 [2024-11-26 13:28:03.892113] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.362 [2024-11-26 13:28:03.892135] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:15.362 [2024-11-26 13:28:03.892169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.362 [2024-11-26 13:28:03.903604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:15.362 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.362 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:15.362 [2024-11-26 13:28:03.909361] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.741 "name": "raid_bdev1", 00:15:16.741 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:16.741 "strip_size_kb": 64, 00:15:16.741 "state": "online", 00:15:16.741 "raid_level": "raid5f", 00:15:16.741 "superblock": true, 00:15:16.741 "num_base_bdevs": 3, 00:15:16.741 "num_base_bdevs_discovered": 3, 00:15:16.741 "num_base_bdevs_operational": 3, 00:15:16.741 "process": { 00:15:16.741 "type": "rebuild", 00:15:16.741 "target": "spare", 00:15:16.741 "progress": { 00:15:16.741 "blocks": 20480, 00:15:16.741 "percent": 16 00:15:16.741 } 00:15:16.741 }, 00:15:16.741 "base_bdevs_list": [ 00:15:16.741 { 00:15:16.741 "name": "spare", 00:15:16.741 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:16.741 "is_configured": true, 00:15:16.741 "data_offset": 2048, 00:15:16.741 "data_size": 63488 00:15:16.741 }, 00:15:16.741 { 00:15:16.741 "name": "BaseBdev2", 00:15:16.741 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:16.741 "is_configured": true, 00:15:16.741 "data_offset": 2048, 00:15:16.741 "data_size": 63488 00:15:16.741 }, 00:15:16.741 { 00:15:16.741 "name": "BaseBdev3", 00:15:16.741 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:16.741 "is_configured": true, 00:15:16.741 "data_offset": 2048, 00:15:16.741 "data_size": 63488 00:15:16.741 } 00:15:16.741 ] 00:15:16.741 }' 00:15:16.741 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.741 [2024-11-26 13:28:05.042657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.741 [2024-11-26 13:28:05.119952] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:16.741 [2024-11-26 13:28:05.120013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.741 [2024-11-26 13:28:05.120032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.741 [2024-11-26 13:28:05.120044] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.741 "name": "raid_bdev1", 00:15:16.741 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:16.741 "strip_size_kb": 64, 00:15:16.741 "state": "online", 00:15:16.741 "raid_level": "raid5f", 00:15:16.741 "superblock": true, 00:15:16.741 "num_base_bdevs": 3, 00:15:16.741 "num_base_bdevs_discovered": 2, 00:15:16.741 "num_base_bdevs_operational": 2, 00:15:16.741 "base_bdevs_list": [ 00:15:16.741 { 00:15:16.741 "name": null, 00:15:16.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.741 "is_configured": false, 00:15:16.741 "data_offset": 0, 00:15:16.741 "data_size": 63488 00:15:16.741 }, 00:15:16.741 { 00:15:16.741 "name": "BaseBdev2", 00:15:16.741 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:16.741 "is_configured": true, 00:15:16.741 
"data_offset": 2048, 00:15:16.741 "data_size": 63488 00:15:16.741 }, 00:15:16.741 { 00:15:16.741 "name": "BaseBdev3", 00:15:16.741 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:16.741 "is_configured": true, 00:15:16.741 "data_offset": 2048, 00:15:16.741 "data_size": 63488 00:15:16.741 } 00:15:16.741 ] 00:15:16.741 }' 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.741 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.309 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:17.309 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.309 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.309 [2024-11-26 13:28:05.664441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:17.309 [2024-11-26 13:28:05.664675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.309 [2024-11-26 13:28:05.664709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:17.309 [2024-11-26 13:28:05.664729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.309 [2024-11-26 13:28:05.665238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.309 [2024-11-26 13:28:05.665285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:17.309 [2024-11-26 13:28:05.665374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:17.309 [2024-11-26 13:28:05.665400] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:17.309 [2024-11-26 13:28:05.665411] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:17.309 [2024-11-26 13:28:05.665437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.309 [2024-11-26 13:28:05.675422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:17.309 spare 00:15:17.309 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.309 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:17.309 [2024-11-26 13:28:05.681151] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.244 "name": "raid_bdev1", 00:15:18.244 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 
00:15:18.244 "strip_size_kb": 64, 00:15:18.244 "state": "online", 00:15:18.244 "raid_level": "raid5f", 00:15:18.244 "superblock": true, 00:15:18.244 "num_base_bdevs": 3, 00:15:18.244 "num_base_bdevs_discovered": 3, 00:15:18.244 "num_base_bdevs_operational": 3, 00:15:18.244 "process": { 00:15:18.244 "type": "rebuild", 00:15:18.244 "target": "spare", 00:15:18.244 "progress": { 00:15:18.244 "blocks": 20480, 00:15:18.244 "percent": 16 00:15:18.244 } 00:15:18.244 }, 00:15:18.244 "base_bdevs_list": [ 00:15:18.244 { 00:15:18.244 "name": "spare", 00:15:18.244 "uuid": "a1deabde-97ed-54cd-a1aa-62736432f56d", 00:15:18.244 "is_configured": true, 00:15:18.244 "data_offset": 2048, 00:15:18.244 "data_size": 63488 00:15:18.244 }, 00:15:18.244 { 00:15:18.244 "name": "BaseBdev2", 00:15:18.244 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:18.244 "is_configured": true, 00:15:18.244 "data_offset": 2048, 00:15:18.244 "data_size": 63488 00:15:18.244 }, 00:15:18.244 { 00:15:18.244 "name": "BaseBdev3", 00:15:18.244 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:18.244 "is_configured": true, 00:15:18.244 "data_offset": 2048, 00:15:18.244 "data_size": 63488 00:15:18.244 } 00:15:18.244 ] 00:15:18.244 }' 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.244 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:18.503 [2024-11-26 13:28:06.834460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.503 [2024-11-26 13:28:06.891748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:18.503 [2024-11-26 13:28:06.891804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.503 [2024-11-26 13:28:06.891828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.503 [2024-11-26 13:28:06.891837] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.503 
13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.503 "name": "raid_bdev1", 00:15:18.503 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:18.503 "strip_size_kb": 64, 00:15:18.503 "state": "online", 00:15:18.503 "raid_level": "raid5f", 00:15:18.503 "superblock": true, 00:15:18.503 "num_base_bdevs": 3, 00:15:18.503 "num_base_bdevs_discovered": 2, 00:15:18.503 "num_base_bdevs_operational": 2, 00:15:18.503 "base_bdevs_list": [ 00:15:18.503 { 00:15:18.503 "name": null, 00:15:18.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.503 "is_configured": false, 00:15:18.503 "data_offset": 0, 00:15:18.503 "data_size": 63488 00:15:18.503 }, 00:15:18.503 { 00:15:18.503 "name": "BaseBdev2", 00:15:18.503 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:18.503 "is_configured": true, 00:15:18.503 "data_offset": 2048, 00:15:18.503 "data_size": 63488 00:15:18.503 }, 00:15:18.503 { 00:15:18.503 "name": "BaseBdev3", 00:15:18.503 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:18.503 "is_configured": true, 00:15:18.503 "data_offset": 2048, 00:15:18.503 "data_size": 63488 00:15:18.503 } 00:15:18.503 ] 00:15:18.503 }' 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.503 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.071 13:28:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.071 "name": "raid_bdev1", 00:15:19.071 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:19.071 "strip_size_kb": 64, 00:15:19.071 "state": "online", 00:15:19.071 "raid_level": "raid5f", 00:15:19.071 "superblock": true, 00:15:19.071 "num_base_bdevs": 3, 00:15:19.071 "num_base_bdevs_discovered": 2, 00:15:19.071 "num_base_bdevs_operational": 2, 00:15:19.071 "base_bdevs_list": [ 00:15:19.071 { 00:15:19.071 "name": null, 00:15:19.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.071 "is_configured": false, 00:15:19.071 "data_offset": 0, 00:15:19.071 "data_size": 63488 00:15:19.071 }, 00:15:19.071 { 00:15:19.071 "name": "BaseBdev2", 00:15:19.071 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:19.071 "is_configured": true, 00:15:19.071 "data_offset": 2048, 00:15:19.071 "data_size": 63488 00:15:19.071 }, 00:15:19.071 { 00:15:19.071 "name": "BaseBdev3", 00:15:19.071 "uuid": 
"a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:19.071 "is_configured": true, 00:15:19.071 "data_offset": 2048, 00:15:19.071 "data_size": 63488 00:15:19.071 } 00:15:19.071 ] 00:15:19.071 }' 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.071 [2024-11-26 13:28:07.591387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.071 [2024-11-26 13:28:07.591437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.071 [2024-11-26 13:28:07.591467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:19.071 [2024-11-26 13:28:07.591480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.071 [2024-11-26 13:28:07.591926] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.071 [2024-11-26 13:28:07.591954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.071 [2024-11-26 13:28:07.592038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:19.071 [2024-11-26 13:28:07.592058] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.071 [2024-11-26 13:28:07.592079] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.071 [2024-11-26 13:28:07.592089] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:19.071 BaseBdev1 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.071 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.449 "name": "raid_bdev1", 00:15:20.449 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:20.449 "strip_size_kb": 64, 00:15:20.449 "state": "online", 00:15:20.449 "raid_level": "raid5f", 00:15:20.449 "superblock": true, 00:15:20.449 "num_base_bdevs": 3, 00:15:20.449 "num_base_bdevs_discovered": 2, 00:15:20.449 "num_base_bdevs_operational": 2, 00:15:20.449 "base_bdevs_list": [ 00:15:20.449 { 00:15:20.449 "name": null, 00:15:20.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.449 "is_configured": false, 00:15:20.449 "data_offset": 0, 00:15:20.449 "data_size": 63488 00:15:20.449 }, 00:15:20.449 { 00:15:20.449 "name": "BaseBdev2", 00:15:20.449 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:20.449 "is_configured": true, 00:15:20.449 "data_offset": 2048, 00:15:20.449 "data_size": 63488 00:15:20.449 }, 00:15:20.449 { 00:15:20.449 "name": "BaseBdev3", 00:15:20.449 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:20.449 "is_configured": true, 00:15:20.449 "data_offset": 2048, 00:15:20.449 "data_size": 63488 00:15:20.449 } 00:15:20.449 ] 00:15:20.449 }' 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:20.449 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.708 "name": "raid_bdev1", 00:15:20.708 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:20.708 "strip_size_kb": 64, 00:15:20.708 "state": "online", 00:15:20.708 "raid_level": "raid5f", 00:15:20.708 "superblock": true, 00:15:20.708 "num_base_bdevs": 3, 00:15:20.708 "num_base_bdevs_discovered": 2, 00:15:20.708 "num_base_bdevs_operational": 2, 00:15:20.708 "base_bdevs_list": [ 00:15:20.708 { 00:15:20.708 "name": null, 00:15:20.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.708 "is_configured": false, 00:15:20.708 "data_offset": 0, 00:15:20.708 "data_size": 63488 00:15:20.708 }, 00:15:20.708 { 00:15:20.708 "name": 
"BaseBdev2", 00:15:20.708 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:20.708 "is_configured": true, 00:15:20.708 "data_offset": 2048, 00:15:20.708 "data_size": 63488 00:15:20.708 }, 00:15:20.708 { 00:15:20.708 "name": "BaseBdev3", 00:15:20.708 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:20.708 "is_configured": true, 00:15:20.708 "data_offset": 2048, 00:15:20.708 "data_size": 63488 00:15:20.708 } 00:15:20.708 ] 00:15:20.708 }' 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.966 [2024-11-26 13:28:09.283814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.966 [2024-11-26 13:28:09.283928] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:20.966 [2024-11-26 13:28:09.283948] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:20.966 request: 00:15:20.966 { 00:15:20.966 "base_bdev": "BaseBdev1", 00:15:20.966 "raid_bdev": "raid_bdev1", 00:15:20.966 "method": "bdev_raid_add_base_bdev", 00:15:20.966 "req_id": 1 00:15:20.966 } 00:15:20.966 Got JSON-RPC error response 00:15:20.966 response: 00:15:20.966 { 00:15:20.966 "code": -22, 00:15:20.966 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:20.966 } 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.901 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.901 "name": "raid_bdev1", 00:15:21.901 "uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:21.901 "strip_size_kb": 64, 00:15:21.901 "state": "online", 00:15:21.901 "raid_level": "raid5f", 00:15:21.901 "superblock": true, 00:15:21.901 "num_base_bdevs": 3, 00:15:21.901 "num_base_bdevs_discovered": 2, 00:15:21.901 "num_base_bdevs_operational": 2, 00:15:21.901 "base_bdevs_list": [ 00:15:21.901 { 00:15:21.901 "name": null, 00:15:21.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.901 "is_configured": false, 00:15:21.901 "data_offset": 0, 00:15:21.901 
"data_size": 63488 00:15:21.901 }, 00:15:21.901 { 00:15:21.901 "name": "BaseBdev2", 00:15:21.901 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:21.901 "is_configured": true, 00:15:21.901 "data_offset": 2048, 00:15:21.901 "data_size": 63488 00:15:21.901 }, 00:15:21.901 { 00:15:21.901 "name": "BaseBdev3", 00:15:21.901 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:21.901 "is_configured": true, 00:15:21.901 "data_offset": 2048, 00:15:21.901 "data_size": 63488 00:15:21.901 } 00:15:21.901 ] 00:15:21.902 }' 00:15:21.902 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.902 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.470 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.471 "name": "raid_bdev1", 00:15:22.471 
"uuid": "e14fa67e-7877-4d53-8444-74d1e0710471", 00:15:22.471 "strip_size_kb": 64, 00:15:22.471 "state": "online", 00:15:22.471 "raid_level": "raid5f", 00:15:22.471 "superblock": true, 00:15:22.471 "num_base_bdevs": 3, 00:15:22.471 "num_base_bdevs_discovered": 2, 00:15:22.471 "num_base_bdevs_operational": 2, 00:15:22.471 "base_bdevs_list": [ 00:15:22.471 { 00:15:22.471 "name": null, 00:15:22.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.471 "is_configured": false, 00:15:22.471 "data_offset": 0, 00:15:22.471 "data_size": 63488 00:15:22.471 }, 00:15:22.471 { 00:15:22.471 "name": "BaseBdev2", 00:15:22.471 "uuid": "49ce51dd-fbc7-5a2a-8b67-0f587166ce1f", 00:15:22.471 "is_configured": true, 00:15:22.471 "data_offset": 2048, 00:15:22.471 "data_size": 63488 00:15:22.471 }, 00:15:22.471 { 00:15:22.471 "name": "BaseBdev3", 00:15:22.471 "uuid": "a3c39240-c94c-5841-ac37-59b87625fccb", 00:15:22.471 "is_configured": true, 00:15:22.471 "data_offset": 2048, 00:15:22.471 "data_size": 63488 00:15:22.471 } 00:15:22.471 ] 00:15:22.471 }' 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81647 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81647 ']' 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81647 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81647 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.471 killing process with pid 81647 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81647' 00:15:22.471 Received shutdown signal, test time was about 60.000000 seconds 00:15:22.471 00:15:22.471 Latency(us) 00:15:22.471 [2024-11-26T13:28:11.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.471 [2024-11-26T13:28:11.041Z] =================================================================================================================== 00:15:22.471 [2024-11-26T13:28:11.041Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81647 00:15:22.471 [2024-11-26 13:28:10.987493] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.471 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81647 00:15:22.471 [2024-11-26 13:28:10.987605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.471 [2024-11-26 13:28:10.987667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.471 [2024-11-26 13:28:10.987684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:22.730 [2024-11-26 13:28:11.254050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.667 13:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:15:23.667 00:15:23.667 real 0m24.028s 00:15:23.667 user 0m32.006s 00:15:23.667 sys 0m2.511s 00:15:23.667 13:28:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.667 13:28:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.667 ************************************ 00:15:23.667 END TEST raid5f_rebuild_test_sb 00:15:23.667 ************************************ 00:15:23.667 13:28:12 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:23.667 13:28:12 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:23.667 13:28:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:23.667 13:28:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.667 13:28:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.667 ************************************ 00:15:23.667 START TEST raid5f_state_function_test 00:15:23.667 ************************************ 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:23.667 13:28:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:23.667 
13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82410 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:23.667 Process raid pid: 82410 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82410' 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82410 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82410 ']' 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.667 13:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.667 [2024-11-26 13:28:12.231264] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:15:23.667 [2024-11-26 13:28:12.231392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.926 [2024-11-26 13:28:12.394850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.185 [2024-11-26 13:28:12.498519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.185 [2024-11-26 13:28:12.672484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.185 [2024-11-26 13:28:12.672521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.752 [2024-11-26 13:28:13.205769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.752 [2024-11-26 13:28:13.205823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.752 [2024-11-26 13:28:13.205838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.752 [2024-11-26 13:28:13.205852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.752 [2024-11-26 13:28:13.205861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:24.752 [2024-11-26 13:28:13.205872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.752 [2024-11-26 13:28:13.205880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.752 [2024-11-26 13:28:13.205892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.752 "name": "Existed_Raid", 00:15:24.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.752 "strip_size_kb": 64, 00:15:24.752 "state": "configuring", 00:15:24.752 "raid_level": "raid5f", 00:15:24.752 "superblock": false, 00:15:24.752 "num_base_bdevs": 4, 00:15:24.752 "num_base_bdevs_discovered": 0, 00:15:24.752 "num_base_bdevs_operational": 4, 00:15:24.752 "base_bdevs_list": [ 00:15:24.752 { 00:15:24.752 "name": "BaseBdev1", 00:15:24.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.752 "is_configured": false, 00:15:24.752 "data_offset": 0, 00:15:24.752 "data_size": 0 00:15:24.752 }, 00:15:24.752 { 00:15:24.752 "name": "BaseBdev2", 00:15:24.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.752 "is_configured": false, 00:15:24.752 "data_offset": 0, 00:15:24.752 "data_size": 0 00:15:24.752 }, 00:15:24.752 { 00:15:24.752 "name": "BaseBdev3", 00:15:24.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.752 "is_configured": false, 00:15:24.752 "data_offset": 0, 00:15:24.752 "data_size": 0 00:15:24.752 }, 00:15:24.752 { 00:15:24.752 "name": "BaseBdev4", 00:15:24.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.752 "is_configured": false, 00:15:24.752 "data_offset": 0, 00:15:24.752 "data_size": 0 00:15:24.752 } 00:15:24.752 ] 00:15:24.752 }' 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.752 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.319 [2024-11-26 13:28:13.709812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.319 [2024-11-26 13:28:13.709846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.319 [2024-11-26 13:28:13.717817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.319 [2024-11-26 13:28:13.717858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.319 [2024-11-26 13:28:13.717870] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.319 [2024-11-26 13:28:13.717883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.319 [2024-11-26 13:28:13.717891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.319 [2024-11-26 13:28:13.717902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.319 [2024-11-26 13:28:13.717910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:25.319 [2024-11-26 13:28:13.717921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.319 [2024-11-26 13:28:13.756281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.319 BaseBdev1 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.319 
13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.319 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.319 [ 00:15:25.319 { 00:15:25.319 "name": "BaseBdev1", 00:15:25.319 "aliases": [ 00:15:25.320 "ebb723d7-8055-46df-bbf5-6dd7ec32dda5" 00:15:25.320 ], 00:15:25.320 "product_name": "Malloc disk", 00:15:25.320 "block_size": 512, 00:15:25.320 "num_blocks": 65536, 00:15:25.320 "uuid": "ebb723d7-8055-46df-bbf5-6dd7ec32dda5", 00:15:25.320 "assigned_rate_limits": { 00:15:25.320 "rw_ios_per_sec": 0, 00:15:25.320 "rw_mbytes_per_sec": 0, 00:15:25.320 "r_mbytes_per_sec": 0, 00:15:25.320 "w_mbytes_per_sec": 0 00:15:25.320 }, 00:15:25.320 "claimed": true, 00:15:25.320 "claim_type": "exclusive_write", 00:15:25.320 "zoned": false, 00:15:25.320 "supported_io_types": { 00:15:25.320 "read": true, 00:15:25.320 "write": true, 00:15:25.320 "unmap": true, 00:15:25.320 "flush": true, 00:15:25.320 "reset": true, 00:15:25.320 "nvme_admin": false, 00:15:25.320 "nvme_io": false, 00:15:25.320 "nvme_io_md": false, 00:15:25.320 "write_zeroes": true, 00:15:25.320 "zcopy": true, 00:15:25.320 "get_zone_info": false, 00:15:25.320 "zone_management": false, 00:15:25.320 "zone_append": false, 00:15:25.320 "compare": false, 00:15:25.320 "compare_and_write": false, 00:15:25.320 "abort": true, 00:15:25.320 "seek_hole": false, 00:15:25.320 "seek_data": false, 00:15:25.320 "copy": true, 00:15:25.320 "nvme_iov_md": false 00:15:25.320 }, 00:15:25.320 "memory_domains": [ 00:15:25.320 { 00:15:25.320 "dma_device_id": "system", 00:15:25.320 "dma_device_type": 1 00:15:25.320 }, 00:15:25.320 { 00:15:25.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.320 "dma_device_type": 2 00:15:25.320 } 00:15:25.320 ], 00:15:25.320 "driver_specific": {} 00:15:25.320 } 
00:15:25.320 ] 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.320 "name": "Existed_Raid", 00:15:25.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.320 "strip_size_kb": 64, 00:15:25.320 "state": "configuring", 00:15:25.320 "raid_level": "raid5f", 00:15:25.320 "superblock": false, 00:15:25.320 "num_base_bdevs": 4, 00:15:25.320 "num_base_bdevs_discovered": 1, 00:15:25.320 "num_base_bdevs_operational": 4, 00:15:25.320 "base_bdevs_list": [ 00:15:25.320 { 00:15:25.320 "name": "BaseBdev1", 00:15:25.320 "uuid": "ebb723d7-8055-46df-bbf5-6dd7ec32dda5", 00:15:25.320 "is_configured": true, 00:15:25.320 "data_offset": 0, 00:15:25.320 "data_size": 65536 00:15:25.320 }, 00:15:25.320 { 00:15:25.320 "name": "BaseBdev2", 00:15:25.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.320 "is_configured": false, 00:15:25.320 "data_offset": 0, 00:15:25.320 "data_size": 0 00:15:25.320 }, 00:15:25.320 { 00:15:25.320 "name": "BaseBdev3", 00:15:25.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.320 "is_configured": false, 00:15:25.320 "data_offset": 0, 00:15:25.320 "data_size": 0 00:15:25.320 }, 00:15:25.320 { 00:15:25.320 "name": "BaseBdev4", 00:15:25.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.320 "is_configured": false, 00:15:25.320 "data_offset": 0, 00:15:25.320 "data_size": 0 00:15:25.320 } 00:15:25.320 ] 00:15:25.320 }' 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.320 13:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.887 
[2024-11-26 13:28:14.284407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.887 [2024-11-26 13:28:14.284446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.887 [2024-11-26 13:28:14.292479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.887 [2024-11-26 13:28:14.294705] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.887 [2024-11-26 13:28:14.294752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.887 [2024-11-26 13:28:14.294765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.887 [2024-11-26 13:28:14.294780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.887 [2024-11-26 13:28:14.294788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.887 [2024-11-26 13:28:14.294799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.887 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.888 "name": "Existed_Raid", 00:15:25.888 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:25.888 "strip_size_kb": 64, 00:15:25.888 "state": "configuring", 00:15:25.888 "raid_level": "raid5f", 00:15:25.888 "superblock": false, 00:15:25.888 "num_base_bdevs": 4, 00:15:25.888 "num_base_bdevs_discovered": 1, 00:15:25.888 "num_base_bdevs_operational": 4, 00:15:25.888 "base_bdevs_list": [ 00:15:25.888 { 00:15:25.888 "name": "BaseBdev1", 00:15:25.888 "uuid": "ebb723d7-8055-46df-bbf5-6dd7ec32dda5", 00:15:25.888 "is_configured": true, 00:15:25.888 "data_offset": 0, 00:15:25.888 "data_size": 65536 00:15:25.888 }, 00:15:25.888 { 00:15:25.888 "name": "BaseBdev2", 00:15:25.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.888 "is_configured": false, 00:15:25.888 "data_offset": 0, 00:15:25.888 "data_size": 0 00:15:25.888 }, 00:15:25.888 { 00:15:25.888 "name": "BaseBdev3", 00:15:25.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.888 "is_configured": false, 00:15:25.888 "data_offset": 0, 00:15:25.888 "data_size": 0 00:15:25.888 }, 00:15:25.888 { 00:15:25.888 "name": "BaseBdev4", 00:15:25.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.888 "is_configured": false, 00:15:25.888 "data_offset": 0, 00:15:25.888 "data_size": 0 00:15:25.888 } 00:15:25.888 ] 00:15:25.888 }' 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.888 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.455 [2024-11-26 13:28:14.837322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.455 BaseBdev2 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.455 [ 00:15:26.455 { 00:15:26.455 "name": "BaseBdev2", 00:15:26.455 "aliases": [ 00:15:26.455 "891b328f-5380-423e-8d6a-708e86ac6f57" 00:15:26.455 ], 00:15:26.455 "product_name": "Malloc disk", 00:15:26.455 "block_size": 512, 00:15:26.455 "num_blocks": 65536, 00:15:26.455 "uuid": "891b328f-5380-423e-8d6a-708e86ac6f57", 00:15:26.455 "assigned_rate_limits": { 00:15:26.455 "rw_ios_per_sec": 0, 00:15:26.455 "rw_mbytes_per_sec": 0, 00:15:26.455 
"r_mbytes_per_sec": 0, 00:15:26.455 "w_mbytes_per_sec": 0 00:15:26.455 }, 00:15:26.455 "claimed": true, 00:15:26.455 "claim_type": "exclusive_write", 00:15:26.455 "zoned": false, 00:15:26.455 "supported_io_types": { 00:15:26.455 "read": true, 00:15:26.455 "write": true, 00:15:26.455 "unmap": true, 00:15:26.455 "flush": true, 00:15:26.455 "reset": true, 00:15:26.455 "nvme_admin": false, 00:15:26.455 "nvme_io": false, 00:15:26.455 "nvme_io_md": false, 00:15:26.455 "write_zeroes": true, 00:15:26.455 "zcopy": true, 00:15:26.455 "get_zone_info": false, 00:15:26.455 "zone_management": false, 00:15:26.455 "zone_append": false, 00:15:26.455 "compare": false, 00:15:26.455 "compare_and_write": false, 00:15:26.455 "abort": true, 00:15:26.455 "seek_hole": false, 00:15:26.455 "seek_data": false, 00:15:26.455 "copy": true, 00:15:26.455 "nvme_iov_md": false 00:15:26.455 }, 00:15:26.455 "memory_domains": [ 00:15:26.455 { 00:15:26.455 "dma_device_id": "system", 00:15:26.455 "dma_device_type": 1 00:15:26.455 }, 00:15:26.455 { 00:15:26.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.455 "dma_device_type": 2 00:15:26.455 } 00:15:26.455 ], 00:15:26.455 "driver_specific": {} 00:15:26.455 } 00:15:26.455 ] 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.455 "name": "Existed_Raid", 00:15:26.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.455 "strip_size_kb": 64, 00:15:26.455 "state": "configuring", 00:15:26.455 "raid_level": "raid5f", 00:15:26.455 "superblock": false, 00:15:26.455 "num_base_bdevs": 4, 00:15:26.455 "num_base_bdevs_discovered": 2, 00:15:26.455 "num_base_bdevs_operational": 4, 00:15:26.455 "base_bdevs_list": [ 00:15:26.455 { 00:15:26.455 "name": "BaseBdev1", 00:15:26.455 "uuid": 
"ebb723d7-8055-46df-bbf5-6dd7ec32dda5", 00:15:26.455 "is_configured": true, 00:15:26.455 "data_offset": 0, 00:15:26.455 "data_size": 65536 00:15:26.455 }, 00:15:26.455 { 00:15:26.455 "name": "BaseBdev2", 00:15:26.455 "uuid": "891b328f-5380-423e-8d6a-708e86ac6f57", 00:15:26.455 "is_configured": true, 00:15:26.455 "data_offset": 0, 00:15:26.455 "data_size": 65536 00:15:26.455 }, 00:15:26.455 { 00:15:26.455 "name": "BaseBdev3", 00:15:26.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.455 "is_configured": false, 00:15:26.455 "data_offset": 0, 00:15:26.455 "data_size": 0 00:15:26.455 }, 00:15:26.455 { 00:15:26.455 "name": "BaseBdev4", 00:15:26.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.455 "is_configured": false, 00:15:26.455 "data_offset": 0, 00:15:26.455 "data_size": 0 00:15:26.455 } 00:15:26.455 ] 00:15:26.455 }' 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.455 13:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.023 [2024-11-26 13:28:15.437120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.023 BaseBdev3 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.023 [ 00:15:27.023 { 00:15:27.023 "name": "BaseBdev3", 00:15:27.023 "aliases": [ 00:15:27.023 "c1558ebb-eedc-4c43-9acf-d3bfb2c82f5e" 00:15:27.023 ], 00:15:27.023 "product_name": "Malloc disk", 00:15:27.023 "block_size": 512, 00:15:27.023 "num_blocks": 65536, 00:15:27.023 "uuid": "c1558ebb-eedc-4c43-9acf-d3bfb2c82f5e", 00:15:27.023 "assigned_rate_limits": { 00:15:27.023 "rw_ios_per_sec": 0, 00:15:27.023 "rw_mbytes_per_sec": 0, 00:15:27.023 "r_mbytes_per_sec": 0, 00:15:27.023 "w_mbytes_per_sec": 0 00:15:27.023 }, 00:15:27.023 "claimed": true, 00:15:27.023 "claim_type": "exclusive_write", 00:15:27.023 "zoned": false, 00:15:27.023 "supported_io_types": { 00:15:27.023 "read": true, 00:15:27.023 "write": true, 00:15:27.023 "unmap": true, 00:15:27.023 "flush": true, 00:15:27.023 "reset": true, 00:15:27.023 "nvme_admin": false, 
00:15:27.023 "nvme_io": false, 00:15:27.023 "nvme_io_md": false, 00:15:27.023 "write_zeroes": true, 00:15:27.023 "zcopy": true, 00:15:27.023 "get_zone_info": false, 00:15:27.023 "zone_management": false, 00:15:27.023 "zone_append": false, 00:15:27.023 "compare": false, 00:15:27.023 "compare_and_write": false, 00:15:27.023 "abort": true, 00:15:27.023 "seek_hole": false, 00:15:27.023 "seek_data": false, 00:15:27.023 "copy": true, 00:15:27.023 "nvme_iov_md": false 00:15:27.023 }, 00:15:27.023 "memory_domains": [ 00:15:27.023 { 00:15:27.023 "dma_device_id": "system", 00:15:27.023 "dma_device_type": 1 00:15:27.023 }, 00:15:27.023 { 00:15:27.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.023 "dma_device_type": 2 00:15:27.023 } 00:15:27.023 ], 00:15:27.023 "driver_specific": {} 00:15:27.023 } 00:15:27.023 ] 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.023 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.024 "name": "Existed_Raid", 00:15:27.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.024 "strip_size_kb": 64, 00:15:27.024 "state": "configuring", 00:15:27.024 "raid_level": "raid5f", 00:15:27.024 "superblock": false, 00:15:27.024 "num_base_bdevs": 4, 00:15:27.024 "num_base_bdevs_discovered": 3, 00:15:27.024 "num_base_bdevs_operational": 4, 00:15:27.024 "base_bdevs_list": [ 00:15:27.024 { 00:15:27.024 "name": "BaseBdev1", 00:15:27.024 "uuid": "ebb723d7-8055-46df-bbf5-6dd7ec32dda5", 00:15:27.024 "is_configured": true, 00:15:27.024 "data_offset": 0, 00:15:27.024 "data_size": 65536 00:15:27.024 }, 00:15:27.024 { 00:15:27.024 "name": "BaseBdev2", 00:15:27.024 "uuid": "891b328f-5380-423e-8d6a-708e86ac6f57", 00:15:27.024 "is_configured": true, 00:15:27.024 "data_offset": 0, 00:15:27.024 "data_size": 65536 00:15:27.024 }, 00:15:27.024 { 
00:15:27.024 "name": "BaseBdev3", 00:15:27.024 "uuid": "c1558ebb-eedc-4c43-9acf-d3bfb2c82f5e", 00:15:27.024 "is_configured": true, 00:15:27.024 "data_offset": 0, 00:15:27.024 "data_size": 65536 00:15:27.024 }, 00:15:27.024 { 00:15:27.024 "name": "BaseBdev4", 00:15:27.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.024 "is_configured": false, 00:15:27.024 "data_offset": 0, 00:15:27.024 "data_size": 0 00:15:27.024 } 00:15:27.024 ] 00:15:27.024 }' 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.024 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.591 13:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:27.591 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.591 13:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.591 [2024-11-26 13:28:16.009945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:27.591 [2024-11-26 13:28:16.010014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.591 [2024-11-26 13:28:16.010028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:27.591 [2024-11-26 13:28:16.010352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:27.591 [2024-11-26 13:28:16.015942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.591 [2024-11-26 13:28:16.015970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:27.591 [2024-11-26 13:28:16.016251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.591 BaseBdev4 00:15:27.591 13:28:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.591 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.591 [ 00:15:27.591 { 00:15:27.591 "name": "BaseBdev4", 00:15:27.591 "aliases": [ 00:15:27.591 "dc98e3e4-729e-4dc1-a847-b2715b5a03ad" 00:15:27.591 ], 00:15:27.591 "product_name": "Malloc disk", 00:15:27.591 "block_size": 512, 00:15:27.591 "num_blocks": 65536, 00:15:27.591 "uuid": "dc98e3e4-729e-4dc1-a847-b2715b5a03ad", 00:15:27.591 "assigned_rate_limits": { 00:15:27.591 "rw_ios_per_sec": 0, 00:15:27.591 
"rw_mbytes_per_sec": 0, 00:15:27.591 "r_mbytes_per_sec": 0, 00:15:27.591 "w_mbytes_per_sec": 0 00:15:27.591 }, 00:15:27.591 "claimed": true, 00:15:27.591 "claim_type": "exclusive_write", 00:15:27.591 "zoned": false, 00:15:27.591 "supported_io_types": { 00:15:27.591 "read": true, 00:15:27.591 "write": true, 00:15:27.591 "unmap": true, 00:15:27.591 "flush": true, 00:15:27.591 "reset": true, 00:15:27.591 "nvme_admin": false, 00:15:27.591 "nvme_io": false, 00:15:27.591 "nvme_io_md": false, 00:15:27.591 "write_zeroes": true, 00:15:27.591 "zcopy": true, 00:15:27.591 "get_zone_info": false, 00:15:27.591 "zone_management": false, 00:15:27.591 "zone_append": false, 00:15:27.591 "compare": false, 00:15:27.591 "compare_and_write": false, 00:15:27.591 "abort": true, 00:15:27.591 "seek_hole": false, 00:15:27.591 "seek_data": false, 00:15:27.591 "copy": true, 00:15:27.591 "nvme_iov_md": false 00:15:27.591 }, 00:15:27.591 "memory_domains": [ 00:15:27.591 { 00:15:27.592 "dma_device_id": "system", 00:15:27.592 "dma_device_type": 1 00:15:27.592 }, 00:15:27.592 { 00:15:27.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.592 "dma_device_type": 2 00:15:27.592 } 00:15:27.592 ], 00:15:27.592 "driver_specific": {} 00:15:27.592 } 00:15:27.592 ] 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.592 13:28:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.592 "name": "Existed_Raid", 00:15:27.592 "uuid": "3293bd50-357c-4759-bdc3-f621f40b18ec", 00:15:27.592 "strip_size_kb": 64, 00:15:27.592 "state": "online", 00:15:27.592 "raid_level": "raid5f", 00:15:27.592 "superblock": false, 00:15:27.592 "num_base_bdevs": 4, 00:15:27.592 "num_base_bdevs_discovered": 4, 00:15:27.592 "num_base_bdevs_operational": 4, 00:15:27.592 "base_bdevs_list": [ 00:15:27.592 { 00:15:27.592 "name": 
"BaseBdev1", 00:15:27.592 "uuid": "ebb723d7-8055-46df-bbf5-6dd7ec32dda5", 00:15:27.592 "is_configured": true, 00:15:27.592 "data_offset": 0, 00:15:27.592 "data_size": 65536 00:15:27.592 }, 00:15:27.592 { 00:15:27.592 "name": "BaseBdev2", 00:15:27.592 "uuid": "891b328f-5380-423e-8d6a-708e86ac6f57", 00:15:27.592 "is_configured": true, 00:15:27.592 "data_offset": 0, 00:15:27.592 "data_size": 65536 00:15:27.592 }, 00:15:27.592 { 00:15:27.592 "name": "BaseBdev3", 00:15:27.592 "uuid": "c1558ebb-eedc-4c43-9acf-d3bfb2c82f5e", 00:15:27.592 "is_configured": true, 00:15:27.592 "data_offset": 0, 00:15:27.592 "data_size": 65536 00:15:27.592 }, 00:15:27.592 { 00:15:27.592 "name": "BaseBdev4", 00:15:27.592 "uuid": "dc98e3e4-729e-4dc1-a847-b2715b5a03ad", 00:15:27.592 "is_configured": true, 00:15:27.592 "data_offset": 0, 00:15:27.592 "data_size": 65536 00:15:27.592 } 00:15:27.592 ] 00:15:27.592 }' 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.592 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.159 [2024-11-26 13:28:16.582873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.159 "name": "Existed_Raid", 00:15:28.159 "aliases": [ 00:15:28.159 "3293bd50-357c-4759-bdc3-f621f40b18ec" 00:15:28.159 ], 00:15:28.159 "product_name": "Raid Volume", 00:15:28.159 "block_size": 512, 00:15:28.159 "num_blocks": 196608, 00:15:28.159 "uuid": "3293bd50-357c-4759-bdc3-f621f40b18ec", 00:15:28.159 "assigned_rate_limits": { 00:15:28.159 "rw_ios_per_sec": 0, 00:15:28.159 "rw_mbytes_per_sec": 0, 00:15:28.159 "r_mbytes_per_sec": 0, 00:15:28.159 "w_mbytes_per_sec": 0 00:15:28.159 }, 00:15:28.159 "claimed": false, 00:15:28.159 "zoned": false, 00:15:28.159 "supported_io_types": { 00:15:28.159 "read": true, 00:15:28.159 "write": true, 00:15:28.159 "unmap": false, 00:15:28.159 "flush": false, 00:15:28.159 "reset": true, 00:15:28.159 "nvme_admin": false, 00:15:28.159 "nvme_io": false, 00:15:28.159 "nvme_io_md": false, 00:15:28.159 "write_zeroes": true, 00:15:28.159 "zcopy": false, 00:15:28.159 "get_zone_info": false, 00:15:28.159 "zone_management": false, 00:15:28.159 "zone_append": false, 00:15:28.159 "compare": false, 00:15:28.159 "compare_and_write": false, 00:15:28.159 "abort": false, 00:15:28.159 "seek_hole": false, 00:15:28.159 "seek_data": false, 00:15:28.159 "copy": false, 00:15:28.159 "nvme_iov_md": false 00:15:28.159 }, 00:15:28.159 "driver_specific": { 00:15:28.159 "raid": { 00:15:28.159 "uuid": "3293bd50-357c-4759-bdc3-f621f40b18ec", 00:15:28.159 "strip_size_kb": 64, 
00:15:28.159 "state": "online", 00:15:28.159 "raid_level": "raid5f", 00:15:28.159 "superblock": false, 00:15:28.159 "num_base_bdevs": 4, 00:15:28.159 "num_base_bdevs_discovered": 4, 00:15:28.159 "num_base_bdevs_operational": 4, 00:15:28.159 "base_bdevs_list": [ 00:15:28.159 { 00:15:28.159 "name": "BaseBdev1", 00:15:28.159 "uuid": "ebb723d7-8055-46df-bbf5-6dd7ec32dda5", 00:15:28.159 "is_configured": true, 00:15:28.159 "data_offset": 0, 00:15:28.159 "data_size": 65536 00:15:28.159 }, 00:15:28.159 { 00:15:28.159 "name": "BaseBdev2", 00:15:28.159 "uuid": "891b328f-5380-423e-8d6a-708e86ac6f57", 00:15:28.159 "is_configured": true, 00:15:28.159 "data_offset": 0, 00:15:28.159 "data_size": 65536 00:15:28.159 }, 00:15:28.159 { 00:15:28.159 "name": "BaseBdev3", 00:15:28.159 "uuid": "c1558ebb-eedc-4c43-9acf-d3bfb2c82f5e", 00:15:28.159 "is_configured": true, 00:15:28.159 "data_offset": 0, 00:15:28.159 "data_size": 65536 00:15:28.159 }, 00:15:28.159 { 00:15:28.159 "name": "BaseBdev4", 00:15:28.159 "uuid": "dc98e3e4-729e-4dc1-a847-b2715b5a03ad", 00:15:28.159 "is_configured": true, 00:15:28.159 "data_offset": 0, 00:15:28.159 "data_size": 65536 00:15:28.159 } 00:15:28.159 ] 00:15:28.159 } 00:15:28.159 } 00:15:28.159 }' 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:28.159 BaseBdev2 00:15:28.159 BaseBdev3 00:15:28.159 BaseBdev4' 00:15:28.159 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.418 13:28:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.418 13:28:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.418 [2024-11-26 13:28:16.950818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.677 13:28:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.677 "name": "Existed_Raid", 00:15:28.677 "uuid": "3293bd50-357c-4759-bdc3-f621f40b18ec", 00:15:28.677 "strip_size_kb": 64, 00:15:28.677 "state": "online", 00:15:28.677 "raid_level": "raid5f", 00:15:28.677 "superblock": false, 00:15:28.677 "num_base_bdevs": 4, 00:15:28.677 "num_base_bdevs_discovered": 3, 00:15:28.677 "num_base_bdevs_operational": 3, 00:15:28.677 "base_bdevs_list": [ 00:15:28.677 { 00:15:28.677 "name": null, 00:15:28.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.677 "is_configured": false, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev2", 00:15:28.677 "uuid": "891b328f-5380-423e-8d6a-708e86ac6f57", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev3", 00:15:28.677 "uuid": "c1558ebb-eedc-4c43-9acf-d3bfb2c82f5e", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev4", 00:15:28.677 "uuid": "dc98e3e4-729e-4dc1-a847-b2715b5a03ad", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 } 00:15:28.677 ] 00:15:28.677 }' 00:15:28.677 
13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.677 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.243 [2024-11-26 13:28:17.600314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.243 [2024-11-26 13:28:17.600441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.243 [2024-11-26 13:28:17.666728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.243 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.244 [2024-11-26 13:28:17.730789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.244 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 [2024-11-26 13:28:17.860097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:29.503 [2024-11-26 13:28:17.860149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 13:28:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.503 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 BaseBdev2 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 [ 00:15:29.503 { 00:15:29.503 "name": "BaseBdev2", 00:15:29.503 "aliases": [ 00:15:29.503 "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b" 00:15:29.503 ], 00:15:29.503 "product_name": "Malloc disk", 00:15:29.503 "block_size": 512, 00:15:29.503 "num_blocks": 65536, 00:15:29.503 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:29.503 "assigned_rate_limits": { 00:15:29.503 "rw_ios_per_sec": 0, 00:15:29.503 "rw_mbytes_per_sec": 0, 00:15:29.503 "r_mbytes_per_sec": 0, 00:15:29.503 "w_mbytes_per_sec": 0 00:15:29.503 }, 00:15:29.503 "claimed": false, 00:15:29.503 "zoned": false, 00:15:29.503 "supported_io_types": { 00:15:29.503 "read": true, 00:15:29.503 "write": true, 00:15:29.503 "unmap": true, 00:15:29.503 "flush": true, 00:15:29.503 "reset": true, 00:15:29.503 "nvme_admin": false, 00:15:29.503 "nvme_io": false, 00:15:29.503 "nvme_io_md": false, 00:15:29.503 "write_zeroes": true, 00:15:29.503 "zcopy": true, 00:15:29.503 "get_zone_info": false, 00:15:29.503 "zone_management": false, 00:15:29.503 "zone_append": false, 00:15:29.503 "compare": false, 00:15:29.503 "compare_and_write": false, 00:15:29.503 "abort": true, 00:15:29.503 "seek_hole": false, 00:15:29.503 "seek_data": false, 00:15:29.503 "copy": true, 00:15:29.503 "nvme_iov_md": false 00:15:29.503 }, 00:15:29.503 "memory_domains": [ 00:15:29.503 { 00:15:29.503 "dma_device_id": "system", 00:15:29.503 "dma_device_type": 1 00:15:29.503 }, 
00:15:29.503 { 00:15:29.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.503 "dma_device_type": 2 00:15:29.503 } 00:15:29.503 ], 00:15:29.503 "driver_specific": {} 00:15:29.503 } 00:15:29.503 ] 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.504 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.504 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.504 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.504 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.504 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.763 BaseBdev3 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.763 [ 00:15:29.763 { 00:15:29.763 "name": "BaseBdev3", 00:15:29.763 "aliases": [ 00:15:29.763 "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e" 00:15:29.763 ], 00:15:29.763 "product_name": "Malloc disk", 00:15:29.763 "block_size": 512, 00:15:29.763 "num_blocks": 65536, 00:15:29.763 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:29.763 "assigned_rate_limits": { 00:15:29.763 "rw_ios_per_sec": 0, 00:15:29.763 "rw_mbytes_per_sec": 0, 00:15:29.763 "r_mbytes_per_sec": 0, 00:15:29.763 "w_mbytes_per_sec": 0 00:15:29.763 }, 00:15:29.763 "claimed": false, 00:15:29.763 "zoned": false, 00:15:29.763 "supported_io_types": { 00:15:29.763 "read": true, 00:15:29.763 "write": true, 00:15:29.763 "unmap": true, 00:15:29.763 "flush": true, 00:15:29.763 "reset": true, 00:15:29.763 "nvme_admin": false, 00:15:29.763 "nvme_io": false, 00:15:29.763 "nvme_io_md": false, 00:15:29.763 "write_zeroes": true, 00:15:29.763 "zcopy": true, 00:15:29.763 "get_zone_info": false, 00:15:29.763 "zone_management": false, 00:15:29.763 "zone_append": false, 00:15:29.763 "compare": false, 00:15:29.763 "compare_and_write": false, 00:15:29.763 "abort": true, 00:15:29.763 "seek_hole": false, 00:15:29.763 "seek_data": false, 00:15:29.763 "copy": true, 00:15:29.763 "nvme_iov_md": false 00:15:29.763 }, 00:15:29.763 "memory_domains": [ 00:15:29.763 { 00:15:29.763 "dma_device_id": "system", 00:15:29.763 
"dma_device_type": 1 00:15:29.763 }, 00:15:29.763 { 00:15:29.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.763 "dma_device_type": 2 00:15:29.763 } 00:15:29.763 ], 00:15:29.763 "driver_specific": {} 00:15:29.763 } 00:15:29.763 ] 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.763 BaseBdev4 00:15:29.763 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.764 13:28:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.764 [ 00:15:29.764 { 00:15:29.764 "name": "BaseBdev4", 00:15:29.764 "aliases": [ 00:15:29.764 "5927edff-efef-47bd-a9f8-7b51207516d3" 00:15:29.764 ], 00:15:29.764 "product_name": "Malloc disk", 00:15:29.764 "block_size": 512, 00:15:29.764 "num_blocks": 65536, 00:15:29.764 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:29.764 "assigned_rate_limits": { 00:15:29.764 "rw_ios_per_sec": 0, 00:15:29.764 "rw_mbytes_per_sec": 0, 00:15:29.764 "r_mbytes_per_sec": 0, 00:15:29.764 "w_mbytes_per_sec": 0 00:15:29.764 }, 00:15:29.764 "claimed": false, 00:15:29.764 "zoned": false, 00:15:29.764 "supported_io_types": { 00:15:29.764 "read": true, 00:15:29.764 "write": true, 00:15:29.764 "unmap": true, 00:15:29.764 "flush": true, 00:15:29.764 "reset": true, 00:15:29.764 "nvme_admin": false, 00:15:29.764 "nvme_io": false, 00:15:29.764 "nvme_io_md": false, 00:15:29.764 "write_zeroes": true, 00:15:29.764 "zcopy": true, 00:15:29.764 "get_zone_info": false, 00:15:29.764 "zone_management": false, 00:15:29.764 "zone_append": false, 00:15:29.764 "compare": false, 00:15:29.764 "compare_and_write": false, 00:15:29.764 "abort": true, 00:15:29.764 "seek_hole": false, 00:15:29.764 "seek_data": false, 00:15:29.764 "copy": true, 00:15:29.764 "nvme_iov_md": false 00:15:29.764 }, 00:15:29.764 "memory_domains": [ 00:15:29.764 { 00:15:29.764 
"dma_device_id": "system", 00:15:29.764 "dma_device_type": 1 00:15:29.764 }, 00:15:29.764 { 00:15:29.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.764 "dma_device_type": 2 00:15:29.764 } 00:15:29.764 ], 00:15:29.764 "driver_specific": {} 00:15:29.764 } 00:15:29.764 ] 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.764 [2024-11-26 13:28:18.191924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.764 [2024-11-26 13:28:18.192105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.764 [2024-11-26 13:28:18.192148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.764 [2024-11-26 13:28:18.194353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.764 [2024-11-26 13:28:18.194419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.764 "name": "Existed_Raid", 00:15:29.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.764 "strip_size_kb": 64, 00:15:29.764 "state": "configuring", 00:15:29.764 "raid_level": "raid5f", 00:15:29.764 "superblock": false, 00:15:29.764 
"num_base_bdevs": 4, 00:15:29.764 "num_base_bdevs_discovered": 3, 00:15:29.764 "num_base_bdevs_operational": 4, 00:15:29.764 "base_bdevs_list": [ 00:15:29.764 { 00:15:29.764 "name": "BaseBdev1", 00:15:29.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.764 "is_configured": false, 00:15:29.764 "data_offset": 0, 00:15:29.764 "data_size": 0 00:15:29.764 }, 00:15:29.764 { 00:15:29.764 "name": "BaseBdev2", 00:15:29.764 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:29.764 "is_configured": true, 00:15:29.764 "data_offset": 0, 00:15:29.764 "data_size": 65536 00:15:29.764 }, 00:15:29.764 { 00:15:29.764 "name": "BaseBdev3", 00:15:29.764 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:29.764 "is_configured": true, 00:15:29.764 "data_offset": 0, 00:15:29.764 "data_size": 65536 00:15:29.764 }, 00:15:29.764 { 00:15:29.764 "name": "BaseBdev4", 00:15:29.764 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:29.764 "is_configured": true, 00:15:29.764 "data_offset": 0, 00:15:29.764 "data_size": 65536 00:15:29.764 } 00:15:29.764 ] 00:15:29.764 }' 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.764 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.331 [2024-11-26 13:28:18.723998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.331 "name": "Existed_Raid", 00:15:30.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.331 "strip_size_kb": 64, 00:15:30.331 "state": "configuring", 00:15:30.331 "raid_level": "raid5f", 00:15:30.331 "superblock": false, 00:15:30.331 "num_base_bdevs": 4, 
00:15:30.331 "num_base_bdevs_discovered": 2, 00:15:30.331 "num_base_bdevs_operational": 4, 00:15:30.331 "base_bdevs_list": [ 00:15:30.331 { 00:15:30.331 "name": "BaseBdev1", 00:15:30.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.331 "is_configured": false, 00:15:30.331 "data_offset": 0, 00:15:30.331 "data_size": 0 00:15:30.331 }, 00:15:30.331 { 00:15:30.331 "name": null, 00:15:30.331 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:30.331 "is_configured": false, 00:15:30.331 "data_offset": 0, 00:15:30.331 "data_size": 65536 00:15:30.331 }, 00:15:30.331 { 00:15:30.331 "name": "BaseBdev3", 00:15:30.331 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:30.331 "is_configured": true, 00:15:30.331 "data_offset": 0, 00:15:30.331 "data_size": 65536 00:15:30.331 }, 00:15:30.331 { 00:15:30.331 "name": "BaseBdev4", 00:15:30.331 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:30.331 "is_configured": true, 00:15:30.331 "data_offset": 0, 00:15:30.331 "data_size": 65536 00:15:30.331 } 00:15:30.331 ] 00:15:30.331 }' 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.331 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:30.896 13:28:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.896 [2024-11-26 13:28:19.341217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.896 BaseBdev1 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.896 13:28:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.896 [ 00:15:30.896 { 00:15:30.896 "name": "BaseBdev1", 00:15:30.896 "aliases": [ 00:15:30.896 "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b" 00:15:30.896 ], 00:15:30.896 "product_name": "Malloc disk", 00:15:30.896 "block_size": 512, 00:15:30.896 "num_blocks": 65536, 00:15:30.896 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:30.896 "assigned_rate_limits": { 00:15:30.896 "rw_ios_per_sec": 0, 00:15:30.896 "rw_mbytes_per_sec": 0, 00:15:30.896 "r_mbytes_per_sec": 0, 00:15:30.896 "w_mbytes_per_sec": 0 00:15:30.896 }, 00:15:30.896 "claimed": true, 00:15:30.896 "claim_type": "exclusive_write", 00:15:30.896 "zoned": false, 00:15:30.896 "supported_io_types": { 00:15:30.896 "read": true, 00:15:30.896 "write": true, 00:15:30.896 "unmap": true, 00:15:30.896 "flush": true, 00:15:30.896 "reset": true, 00:15:30.896 "nvme_admin": false, 00:15:30.896 "nvme_io": false, 00:15:30.896 "nvme_io_md": false, 00:15:30.896 "write_zeroes": true, 00:15:30.896 "zcopy": true, 00:15:30.896 "get_zone_info": false, 00:15:30.896 "zone_management": false, 00:15:30.896 "zone_append": false, 00:15:30.896 "compare": false, 00:15:30.896 "compare_and_write": false, 00:15:30.896 "abort": true, 00:15:30.896 "seek_hole": false, 00:15:30.896 "seek_data": false, 00:15:30.896 "copy": true, 00:15:30.896 "nvme_iov_md": false 00:15:30.896 }, 00:15:30.896 "memory_domains": [ 00:15:30.896 { 00:15:30.896 "dma_device_id": "system", 00:15:30.896 "dma_device_type": 1 00:15:30.896 }, 00:15:30.896 { 00:15:30.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.896 "dma_device_type": 2 00:15:30.896 } 00:15:30.896 ], 00:15:30.896 "driver_specific": {} 00:15:30.896 } 00:15:30.896 ] 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:30.896 13:28:19 
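The `waitforbdev BaseBdev1` step above (from `autotest_common.sh`) waits for examine to finish and then polls `bdev_get_bdevs -b BaseBdev1 -t 2000` until the bdev appears or the 2000 ms timeout expires. The generic retry loop below sketches that poll-until-ready pattern in plain bash; `wait_for` is an illustrative name, not the real helper, and the probe commands here are trivial stand-ins for the RPC call.

```shell
#!/usr/bin/env bash
# Generic poll-until-ready loop in the spirit of waitforbdev: retry a probe
# command until it succeeds or a timeout (in seconds) expires.
wait_for() {
  local timeout_s=$1; shift
  local deadline=$(( SECONDS + timeout_s ))   # SECONDS = bash's elapsed-time counter
  while ! "$@" 2>/dev/null; do
    (( SECONDS < deadline )) || return 1      # give up once the deadline passes
    sleep 0.1
  done
}

# Usage sketch: an immediately-successful probe vs. one that never succeeds.
wait_for 1 true  && echo "bdev ready"
wait_for 1 false || echo "timed out"
```

In the real test, the probe would be something like `rpc_cmd bdev_get_bdevs -b BaseBdev1`, so the RAID state check only runs once the freshly created malloc bdev is actually registered.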
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.896 "name": "Existed_Raid", 00:15:30.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.896 "strip_size_kb": 64, 00:15:30.896 "state": 
"configuring", 00:15:30.896 "raid_level": "raid5f", 00:15:30.896 "superblock": false, 00:15:30.896 "num_base_bdevs": 4, 00:15:30.896 "num_base_bdevs_discovered": 3, 00:15:30.896 "num_base_bdevs_operational": 4, 00:15:30.896 "base_bdevs_list": [ 00:15:30.896 { 00:15:30.896 "name": "BaseBdev1", 00:15:30.896 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:30.896 "is_configured": true, 00:15:30.896 "data_offset": 0, 00:15:30.896 "data_size": 65536 00:15:30.896 }, 00:15:30.896 { 00:15:30.896 "name": null, 00:15:30.896 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:30.896 "is_configured": false, 00:15:30.896 "data_offset": 0, 00:15:30.896 "data_size": 65536 00:15:30.896 }, 00:15:30.896 { 00:15:30.896 "name": "BaseBdev3", 00:15:30.896 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:30.896 "is_configured": true, 00:15:30.896 "data_offset": 0, 00:15:30.896 "data_size": 65536 00:15:30.896 }, 00:15:30.896 { 00:15:30.896 "name": "BaseBdev4", 00:15:30.896 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:30.896 "is_configured": true, 00:15:30.896 "data_offset": 0, 00:15:30.896 "data_size": 65536 00:15:30.896 } 00:15:30.896 ] 00:15:30.896 }' 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.896 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.464 13:28:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 [2024-11-26 13:28:19.953435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.464 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.465 13:28:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.465 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.465 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.465 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.465 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.465 "name": "Existed_Raid", 00:15:31.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.465 "strip_size_kb": 64, 00:15:31.465 "state": "configuring", 00:15:31.465 "raid_level": "raid5f", 00:15:31.465 "superblock": false, 00:15:31.465 "num_base_bdevs": 4, 00:15:31.465 "num_base_bdevs_discovered": 2, 00:15:31.465 "num_base_bdevs_operational": 4, 00:15:31.465 "base_bdevs_list": [ 00:15:31.465 { 00:15:31.465 "name": "BaseBdev1", 00:15:31.465 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:31.465 "is_configured": true, 00:15:31.465 "data_offset": 0, 00:15:31.465 "data_size": 65536 00:15:31.465 }, 00:15:31.465 { 00:15:31.465 "name": null, 00:15:31.465 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:31.465 "is_configured": false, 00:15:31.465 "data_offset": 0, 00:15:31.465 "data_size": 65536 00:15:31.465 }, 00:15:31.465 { 00:15:31.465 "name": null, 00:15:31.465 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:31.465 "is_configured": false, 00:15:31.465 "data_offset": 0, 00:15:31.465 "data_size": 65536 00:15:31.465 }, 00:15:31.465 { 00:15:31.465 "name": "BaseBdev4", 00:15:31.465 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:31.465 "is_configured": true, 00:15:31.465 "data_offset": 0, 00:15:31.465 "data_size": 65536 00:15:31.465 } 00:15:31.465 ] 00:15:31.465 }' 00:15:31.465 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.465 13:28:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.033 [2024-11-26 13:28:20.545583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.033 
13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.033 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.292 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.292 "name": "Existed_Raid", 00:15:32.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.292 "strip_size_kb": 64, 00:15:32.292 "state": "configuring", 00:15:32.292 "raid_level": "raid5f", 00:15:32.292 "superblock": false, 00:15:32.292 "num_base_bdevs": 4, 00:15:32.292 "num_base_bdevs_discovered": 3, 00:15:32.292 "num_base_bdevs_operational": 4, 00:15:32.292 "base_bdevs_list": [ 00:15:32.292 { 00:15:32.292 "name": "BaseBdev1", 00:15:32.292 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:32.292 "is_configured": true, 00:15:32.292 "data_offset": 0, 00:15:32.292 "data_size": 65536 00:15:32.292 }, 00:15:32.292 { 00:15:32.292 "name": null, 00:15:32.292 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:32.292 "is_configured": 
false, 00:15:32.292 "data_offset": 0, 00:15:32.292 "data_size": 65536 00:15:32.292 }, 00:15:32.292 { 00:15:32.292 "name": "BaseBdev3", 00:15:32.292 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:32.292 "is_configured": true, 00:15:32.292 "data_offset": 0, 00:15:32.292 "data_size": 65536 00:15:32.292 }, 00:15:32.292 { 00:15:32.292 "name": "BaseBdev4", 00:15:32.292 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:32.292 "is_configured": true, 00:15:32.292 "data_offset": 0, 00:15:32.292 "data_size": 65536 00:15:32.292 } 00:15:32.292 ] 00:15:32.292 }' 00:15:32.292 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.292 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.551 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.551 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:32.551 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.551 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.551 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.810 [2024-11-26 13:28:21.137800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.810 "name": "Existed_Raid", 00:15:32.810 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:32.810 "strip_size_kb": 64, 00:15:32.810 "state": "configuring", 00:15:32.810 "raid_level": "raid5f", 00:15:32.810 "superblock": false, 00:15:32.810 "num_base_bdevs": 4, 00:15:32.810 "num_base_bdevs_discovered": 2, 00:15:32.810 "num_base_bdevs_operational": 4, 00:15:32.810 "base_bdevs_list": [ 00:15:32.810 { 00:15:32.810 "name": null, 00:15:32.810 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:32.810 "is_configured": false, 00:15:32.810 "data_offset": 0, 00:15:32.810 "data_size": 65536 00:15:32.810 }, 00:15:32.810 { 00:15:32.810 "name": null, 00:15:32.810 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:32.810 "is_configured": false, 00:15:32.810 "data_offset": 0, 00:15:32.810 "data_size": 65536 00:15:32.810 }, 00:15:32.810 { 00:15:32.810 "name": "BaseBdev3", 00:15:32.810 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:32.810 "is_configured": true, 00:15:32.810 "data_offset": 0, 00:15:32.810 "data_size": 65536 00:15:32.810 }, 00:15:32.810 { 00:15:32.810 "name": "BaseBdev4", 00:15:32.810 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:32.810 "is_configured": true, 00:15:32.810 "data_offset": 0, 00:15:32.810 "data_size": 65536 00:15:32.810 } 00:15:32.810 ] 00:15:32.810 }' 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.810 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.376 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.377 [2024-11-26 13:28:21.782978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.377 "name": "Existed_Raid", 00:15:33.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.377 "strip_size_kb": 64, 00:15:33.377 "state": "configuring", 00:15:33.377 "raid_level": "raid5f", 00:15:33.377 "superblock": false, 00:15:33.377 "num_base_bdevs": 4, 00:15:33.377 "num_base_bdevs_discovered": 3, 00:15:33.377 "num_base_bdevs_operational": 4, 00:15:33.377 "base_bdevs_list": [ 00:15:33.377 { 00:15:33.377 "name": null, 00:15:33.377 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:33.377 "is_configured": false, 00:15:33.377 "data_offset": 0, 00:15:33.377 "data_size": 65536 00:15:33.377 }, 00:15:33.377 { 00:15:33.377 "name": "BaseBdev2", 00:15:33.377 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:33.377 "is_configured": true, 00:15:33.377 "data_offset": 0, 00:15:33.377 "data_size": 65536 00:15:33.377 }, 00:15:33.377 { 00:15:33.377 "name": "BaseBdev3", 00:15:33.377 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:33.377 "is_configured": true, 00:15:33.377 "data_offset": 0, 00:15:33.377 "data_size": 65536 00:15:33.377 }, 00:15:33.377 { 00:15:33.377 "name": "BaseBdev4", 00:15:33.377 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:33.377 "is_configured": true, 00:15:33.377 "data_offset": 0, 00:15:33.377 "data_size": 65536 00:15:33.377 } 00:15:33.377 ] 00:15:33.377 }' 00:15:33.377 13:28:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.377 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.945 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.945 [2024-11-26 13:28:22.462155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:33.945 [2024-11-26 
13:28:22.462209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:33.945 [2024-11-26 13:28:22.462220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:33.945 [2024-11-26 13:28:22.462623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:33.945 [2024-11-26 13:28:22.468323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:33.945 [2024-11-26 13:28:22.468354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:33.945 [2024-11-26 13:28:22.468648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.946 NewBaseBdev 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.946 [ 00:15:33.946 { 00:15:33.946 "name": "NewBaseBdev", 00:15:33.946 "aliases": [ 00:15:33.946 "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b" 00:15:33.946 ], 00:15:33.946 "product_name": "Malloc disk", 00:15:33.946 "block_size": 512, 00:15:33.946 "num_blocks": 65536, 00:15:33.946 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:33.946 "assigned_rate_limits": { 00:15:33.946 "rw_ios_per_sec": 0, 00:15:33.946 "rw_mbytes_per_sec": 0, 00:15:33.946 "r_mbytes_per_sec": 0, 00:15:33.946 "w_mbytes_per_sec": 0 00:15:33.946 }, 00:15:33.946 "claimed": true, 00:15:33.946 "claim_type": "exclusive_write", 00:15:33.946 "zoned": false, 00:15:33.946 "supported_io_types": { 00:15:33.946 "read": true, 00:15:33.946 "write": true, 00:15:33.946 "unmap": true, 00:15:33.946 "flush": true, 00:15:33.946 "reset": true, 00:15:33.946 "nvme_admin": false, 00:15:33.946 "nvme_io": false, 00:15:33.946 "nvme_io_md": false, 00:15:33.946 "write_zeroes": true, 00:15:33.946 "zcopy": true, 00:15:33.946 "get_zone_info": false, 00:15:33.946 "zone_management": false, 00:15:33.946 "zone_append": false, 00:15:33.946 "compare": false, 00:15:33.946 "compare_and_write": false, 00:15:33.946 "abort": true, 00:15:33.946 "seek_hole": false, 00:15:33.946 "seek_data": false, 00:15:33.946 "copy": true, 00:15:33.946 "nvme_iov_md": false 00:15:33.946 }, 00:15:33.946 "memory_domains": [ 00:15:33.946 { 00:15:33.946 "dma_device_id": "system", 00:15:33.946 "dma_device_type": 1 00:15:33.946 }, 00:15:33.946 { 00:15:33.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.946 "dma_device_type": 2 00:15:33.946 } 
00:15:33.946 ], 00:15:33.946 "driver_specific": {} 00:15:33.946 } 00:15:33.946 ] 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.946 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.220 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.220 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.220 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.220 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.220 13:28:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.220 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.220 "name": "Existed_Raid", 00:15:34.221 "uuid": "5d167f06-91b7-4f39-b2f5-20d2e297fab5", 00:15:34.221 "strip_size_kb": 64, 00:15:34.221 "state": "online", 00:15:34.221 "raid_level": "raid5f", 00:15:34.221 "superblock": false, 00:15:34.221 "num_base_bdevs": 4, 00:15:34.221 "num_base_bdevs_discovered": 4, 00:15:34.221 "num_base_bdevs_operational": 4, 00:15:34.221 "base_bdevs_list": [ 00:15:34.221 { 00:15:34.221 "name": "NewBaseBdev", 00:15:34.221 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:34.221 "is_configured": true, 00:15:34.221 "data_offset": 0, 00:15:34.221 "data_size": 65536 00:15:34.221 }, 00:15:34.221 { 00:15:34.221 "name": "BaseBdev2", 00:15:34.221 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:34.221 "is_configured": true, 00:15:34.221 "data_offset": 0, 00:15:34.221 "data_size": 65536 00:15:34.221 }, 00:15:34.221 { 00:15:34.221 "name": "BaseBdev3", 00:15:34.221 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:34.221 "is_configured": true, 00:15:34.221 "data_offset": 0, 00:15:34.221 "data_size": 65536 00:15:34.221 }, 00:15:34.221 { 00:15:34.221 "name": "BaseBdev4", 00:15:34.221 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:34.221 "is_configured": true, 00:15:34.221 "data_offset": 0, 00:15:34.221 "data_size": 65536 00:15:34.221 } 00:15:34.221 ] 00:15:34.221 }' 00:15:34.221 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.221 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.490 [2024-11-26 13:28:23.023755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.490 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.748 "name": "Existed_Raid", 00:15:34.748 "aliases": [ 00:15:34.748 "5d167f06-91b7-4f39-b2f5-20d2e297fab5" 00:15:34.748 ], 00:15:34.748 "product_name": "Raid Volume", 00:15:34.748 "block_size": 512, 00:15:34.748 "num_blocks": 196608, 00:15:34.748 "uuid": "5d167f06-91b7-4f39-b2f5-20d2e297fab5", 00:15:34.748 "assigned_rate_limits": { 00:15:34.748 "rw_ios_per_sec": 0, 00:15:34.748 "rw_mbytes_per_sec": 0, 00:15:34.748 "r_mbytes_per_sec": 0, 00:15:34.748 "w_mbytes_per_sec": 0 00:15:34.748 }, 00:15:34.748 "claimed": false, 00:15:34.748 "zoned": false, 00:15:34.748 "supported_io_types": { 00:15:34.748 "read": true, 00:15:34.748 "write": true, 00:15:34.748 "unmap": false, 00:15:34.748 "flush": false, 00:15:34.748 "reset": true, 00:15:34.748 "nvme_admin": false, 00:15:34.748 "nvme_io": false, 00:15:34.748 "nvme_io_md": 
false, 00:15:34.748 "write_zeroes": true, 00:15:34.748 "zcopy": false, 00:15:34.748 "get_zone_info": false, 00:15:34.748 "zone_management": false, 00:15:34.748 "zone_append": false, 00:15:34.748 "compare": false, 00:15:34.748 "compare_and_write": false, 00:15:34.748 "abort": false, 00:15:34.748 "seek_hole": false, 00:15:34.748 "seek_data": false, 00:15:34.748 "copy": false, 00:15:34.748 "nvme_iov_md": false 00:15:34.748 }, 00:15:34.748 "driver_specific": { 00:15:34.748 "raid": { 00:15:34.748 "uuid": "5d167f06-91b7-4f39-b2f5-20d2e297fab5", 00:15:34.748 "strip_size_kb": 64, 00:15:34.748 "state": "online", 00:15:34.748 "raid_level": "raid5f", 00:15:34.748 "superblock": false, 00:15:34.748 "num_base_bdevs": 4, 00:15:34.748 "num_base_bdevs_discovered": 4, 00:15:34.748 "num_base_bdevs_operational": 4, 00:15:34.748 "base_bdevs_list": [ 00:15:34.748 { 00:15:34.748 "name": "NewBaseBdev", 00:15:34.748 "uuid": "4b7b5c9c-8bce-40fe-b1b7-ba6683df5b8b", 00:15:34.748 "is_configured": true, 00:15:34.748 "data_offset": 0, 00:15:34.748 "data_size": 65536 00:15:34.748 }, 00:15:34.748 { 00:15:34.748 "name": "BaseBdev2", 00:15:34.748 "uuid": "0f50dc1d-abfe-4ef4-8d3b-eb5623e73c7b", 00:15:34.748 "is_configured": true, 00:15:34.748 "data_offset": 0, 00:15:34.748 "data_size": 65536 00:15:34.748 }, 00:15:34.748 { 00:15:34.748 "name": "BaseBdev3", 00:15:34.748 "uuid": "869fd5f4-fbf0-4f13-80b0-6c3e70fda10e", 00:15:34.748 "is_configured": true, 00:15:34.748 "data_offset": 0, 00:15:34.748 "data_size": 65536 00:15:34.748 }, 00:15:34.748 { 00:15:34.748 "name": "BaseBdev4", 00:15:34.748 "uuid": "5927edff-efef-47bd-a9f8-7b51207516d3", 00:15:34.748 "is_configured": true, 00:15:34.748 "data_offset": 0, 00:15:34.748 "data_size": 65536 00:15:34.748 } 00:15:34.748 ] 00:15:34.748 } 00:15:34.748 } 00:15:34.748 }' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.748 13:28:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:34.748 BaseBdev2 00:15:34.748 BaseBdev3 00:15:34.748 BaseBdev4' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:34.748 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.749 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.749 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.006 13:28:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 [2024-11-26 13:28:23.383647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.006 [2024-11-26 13:28:23.383674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.006 [2024-11-26 13:28:23.383743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.006 [2024-11-26 13:28:23.384044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.006 [2024-11-26 13:28:23.384059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82410 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82410 ']' 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82410 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82410 00:15:35.006 killing process with pid 82410 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82410' 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82410 00:15:35.006 [2024-11-26 13:28:23.422141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.006 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82410 00:15:35.264 [2024-11-26 13:28:23.695535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:36.201 00:15:36.201 real 0m12.399s 00:15:36.201 user 0m20.917s 00:15:36.201 sys 0m1.778s 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.201 ************************************ 00:15:36.201 END TEST raid5f_state_function_test 00:15:36.201 ************************************ 00:15:36.201 13:28:24 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:36.201 13:28:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:36.201 13:28:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.201 13:28:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.201 ************************************ 00:15:36.201 START TEST 
raid5f_state_function_test_sb 00:15:36.201 ************************************ 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:36.201 
13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.201 Process raid pid: 83082 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83082 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83082' 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83082 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # 
'[' -z 83082 ']' 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.201 13:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.201 [2024-11-26 13:28:24.728907] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:15:36.201 [2024-11-26 13:28:24.729448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.461 [2024-11-26 13:28:24.917487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.461 [2024-11-26 13:28:25.021978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.720 [2024-11-26 13:28:25.191809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.720 [2024-11-26 13:28:25.192099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.288 [2024-11-26 13:28:25.677476] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.288 [2024-11-26 13:28:25.677682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.288 [2024-11-26 13:28:25.677708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.288 [2024-11-26 13:28:25.677725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.288 [2024-11-26 13:28:25.677734] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:37.288 [2024-11-26 13:28:25.677746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.288 [2024-11-26 13:28:25.677754] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:37.288 [2024-11-26 13:28:25.677766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.288 "name": "Existed_Raid", 00:15:37.288 "uuid": "cd4240ff-b1ed-4f61-b9d7-205ad81325ec", 00:15:37.288 "strip_size_kb": 64, 00:15:37.288 "state": "configuring", 00:15:37.288 "raid_level": "raid5f", 00:15:37.288 "superblock": true, 00:15:37.288 "num_base_bdevs": 4, 00:15:37.288 "num_base_bdevs_discovered": 0, 00:15:37.288 "num_base_bdevs_operational": 4, 00:15:37.288 "base_bdevs_list": [ 00:15:37.288 { 00:15:37.288 "name": "BaseBdev1", 00:15:37.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.288 "is_configured": false, 00:15:37.288 "data_offset": 0, 00:15:37.288 "data_size": 0 00:15:37.288 }, 00:15:37.288 { 00:15:37.288 "name": "BaseBdev2", 00:15:37.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.288 "is_configured": false, 00:15:37.288 "data_offset": 0, 00:15:37.288 "data_size": 0 00:15:37.288 }, 00:15:37.288 { 00:15:37.288 "name": "BaseBdev3", 00:15:37.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.288 "is_configured": false, 00:15:37.288 "data_offset": 0, 00:15:37.288 "data_size": 0 00:15:37.288 }, 00:15:37.288 { 00:15:37.288 "name": "BaseBdev4", 00:15:37.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.288 "is_configured": false, 00:15:37.288 "data_offset": 0, 00:15:37.288 "data_size": 0 00:15:37.288 } 00:15:37.288 ] 00:15:37.288 }' 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.288 13:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.860 [2024-11-26 13:28:26.145518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.860 [2024-11-26 13:28:26.145716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.860 [2024-11-26 13:28:26.153532] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.860 [2024-11-26 13:28:26.153591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.860 [2024-11-26 13:28:26.153619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.860 [2024-11-26 13:28:26.153632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.860 [2024-11-26 13:28:26.153640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.860 [2024-11-26 13:28:26.153651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.860 [2024-11-26 13:28:26.153658] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:37.860 [2024-11-26 13:28:26.153669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.860 [2024-11-26 13:28:26.192403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.860 BaseBdev1 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.860 [ 00:15:37.860 { 00:15:37.860 "name": "BaseBdev1", 00:15:37.860 "aliases": [ 00:15:37.860 "e6a8d647-1619-4aa6-9a05-0e410763e0bb" 00:15:37.860 ], 00:15:37.860 "product_name": "Malloc disk", 00:15:37.860 "block_size": 512, 00:15:37.860 "num_blocks": 65536, 00:15:37.860 "uuid": "e6a8d647-1619-4aa6-9a05-0e410763e0bb", 00:15:37.860 "assigned_rate_limits": { 00:15:37.860 "rw_ios_per_sec": 0, 00:15:37.860 "rw_mbytes_per_sec": 0, 00:15:37.860 "r_mbytes_per_sec": 0, 00:15:37.860 "w_mbytes_per_sec": 0 00:15:37.860 }, 00:15:37.860 "claimed": true, 00:15:37.860 "claim_type": "exclusive_write", 00:15:37.860 "zoned": false, 00:15:37.860 "supported_io_types": { 00:15:37.860 "read": true, 00:15:37.860 "write": true, 00:15:37.860 "unmap": true, 00:15:37.860 "flush": true, 00:15:37.860 "reset": true, 00:15:37.860 "nvme_admin": false, 00:15:37.860 "nvme_io": false, 00:15:37.860 "nvme_io_md": false, 00:15:37.860 "write_zeroes": true, 00:15:37.860 "zcopy": true, 00:15:37.860 "get_zone_info": false, 00:15:37.860 "zone_management": false, 00:15:37.860 "zone_append": false, 00:15:37.860 "compare": false, 00:15:37.860 "compare_and_write": false, 00:15:37.860 "abort": true, 00:15:37.860 "seek_hole": false, 00:15:37.860 "seek_data": false, 00:15:37.860 "copy": true, 00:15:37.860 "nvme_iov_md": false 00:15:37.860 }, 00:15:37.860 "memory_domains": [ 00:15:37.860 { 00:15:37.860 "dma_device_id": "system", 00:15:37.860 "dma_device_type": 1 00:15:37.860 }, 00:15:37.860 { 00:15:37.860 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:37.860 "dma_device_type": 2 00:15:37.860 } 00:15:37.860 ], 00:15:37.860 "driver_specific": {} 00:15:37.860 } 00:15:37.860 ] 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.860 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.861 13:28:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.861 "name": "Existed_Raid", 00:15:37.861 "uuid": "87669b19-ef3c-4325-b5c3-3ecdc1f8a61a", 00:15:37.861 "strip_size_kb": 64, 00:15:37.861 "state": "configuring", 00:15:37.861 "raid_level": "raid5f", 00:15:37.861 "superblock": true, 00:15:37.861 "num_base_bdevs": 4, 00:15:37.861 "num_base_bdevs_discovered": 1, 00:15:37.861 "num_base_bdevs_operational": 4, 00:15:37.861 "base_bdevs_list": [ 00:15:37.861 { 00:15:37.861 "name": "BaseBdev1", 00:15:37.861 "uuid": "e6a8d647-1619-4aa6-9a05-0e410763e0bb", 00:15:37.861 "is_configured": true, 00:15:37.861 "data_offset": 2048, 00:15:37.861 "data_size": 63488 00:15:37.861 }, 00:15:37.861 { 00:15:37.861 "name": "BaseBdev2", 00:15:37.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.861 "is_configured": false, 00:15:37.861 "data_offset": 0, 00:15:37.861 "data_size": 0 00:15:37.861 }, 00:15:37.861 { 00:15:37.861 "name": "BaseBdev3", 00:15:37.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.861 "is_configured": false, 00:15:37.861 "data_offset": 0, 00:15:37.861 "data_size": 0 00:15:37.861 }, 00:15:37.861 { 00:15:37.861 "name": "BaseBdev4", 00:15:37.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.861 "is_configured": false, 00:15:37.861 "data_offset": 0, 00:15:37.861 "data_size": 0 00:15:37.861 } 00:15:37.861 ] 00:15:37.861 }' 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.861 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.428 13:28:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.428 [2024-11-26 13:28:26.744571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.428 [2024-11-26 13:28:26.744656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.428 [2024-11-26 13:28:26.752652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.428 [2024-11-26 13:28:26.754898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.428 [2024-11-26 13:28:26.754947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.428 [2024-11-26 13:28:26.754961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.428 [2024-11-26 13:28:26.754976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.428 [2024-11-26 13:28:26.754986] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:38.428 [2024-11-26 13:28:26.754997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.428 13:28:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.428 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.428 "name": "Existed_Raid", 00:15:38.428 "uuid": "cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4", 00:15:38.428 "strip_size_kb": 64, 00:15:38.428 "state": "configuring", 00:15:38.428 "raid_level": "raid5f", 00:15:38.428 "superblock": true, 00:15:38.428 "num_base_bdevs": 4, 00:15:38.428 "num_base_bdevs_discovered": 1, 00:15:38.428 "num_base_bdevs_operational": 4, 00:15:38.428 "base_bdevs_list": [ 00:15:38.428 { 00:15:38.428 "name": "BaseBdev1", 00:15:38.428 "uuid": "e6a8d647-1619-4aa6-9a05-0e410763e0bb", 00:15:38.428 "is_configured": true, 00:15:38.428 "data_offset": 2048, 00:15:38.428 "data_size": 63488 00:15:38.428 }, 00:15:38.428 { 00:15:38.428 "name": "BaseBdev2", 00:15:38.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.428 "is_configured": false, 00:15:38.428 "data_offset": 0, 00:15:38.428 "data_size": 0 00:15:38.428 }, 00:15:38.428 { 00:15:38.428 "name": "BaseBdev3", 00:15:38.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.428 "is_configured": false, 00:15:38.428 "data_offset": 0, 00:15:38.428 "data_size": 0 00:15:38.428 }, 00:15:38.428 { 00:15:38.428 "name": "BaseBdev4", 00:15:38.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.429 "is_configured": false, 00:15:38.429 "data_offset": 0, 00:15:38.429 "data_size": 0 00:15:38.429 } 00:15:38.429 ] 00:15:38.429 }' 00:15:38.429 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.429 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.996 [2024-11-26 13:28:27.314317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.996 BaseBdev2 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.996 [ 00:15:38.996 { 00:15:38.996 "name": "BaseBdev2", 00:15:38.996 "aliases": [ 00:15:38.996 
"b63037e9-b369-4ccc-bf6b-864b60887ae1" 00:15:38.996 ], 00:15:38.996 "product_name": "Malloc disk", 00:15:38.996 "block_size": 512, 00:15:38.996 "num_blocks": 65536, 00:15:38.996 "uuid": "b63037e9-b369-4ccc-bf6b-864b60887ae1", 00:15:38.996 "assigned_rate_limits": { 00:15:38.996 "rw_ios_per_sec": 0, 00:15:38.996 "rw_mbytes_per_sec": 0, 00:15:38.996 "r_mbytes_per_sec": 0, 00:15:38.996 "w_mbytes_per_sec": 0 00:15:38.996 }, 00:15:38.996 "claimed": true, 00:15:38.996 "claim_type": "exclusive_write", 00:15:38.996 "zoned": false, 00:15:38.996 "supported_io_types": { 00:15:38.996 "read": true, 00:15:38.996 "write": true, 00:15:38.996 "unmap": true, 00:15:38.996 "flush": true, 00:15:38.996 "reset": true, 00:15:38.996 "nvme_admin": false, 00:15:38.996 "nvme_io": false, 00:15:38.996 "nvme_io_md": false, 00:15:38.996 "write_zeroes": true, 00:15:38.996 "zcopy": true, 00:15:38.996 "get_zone_info": false, 00:15:38.996 "zone_management": false, 00:15:38.996 "zone_append": false, 00:15:38.996 "compare": false, 00:15:38.996 "compare_and_write": false, 00:15:38.996 "abort": true, 00:15:38.996 "seek_hole": false, 00:15:38.996 "seek_data": false, 00:15:38.996 "copy": true, 00:15:38.996 "nvme_iov_md": false 00:15:38.996 }, 00:15:38.996 "memory_domains": [ 00:15:38.996 { 00:15:38.996 "dma_device_id": "system", 00:15:38.996 "dma_device_type": 1 00:15:38.996 }, 00:15:38.996 { 00:15:38.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.996 "dma_device_type": 2 00:15:38.996 } 00:15:38.996 ], 00:15:38.996 "driver_specific": {} 00:15:38.996 } 00:15:38.996 ] 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.996 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.997 "name": "Existed_Raid", 00:15:38.997 "uuid": 
"cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4", 00:15:38.997 "strip_size_kb": 64, 00:15:38.997 "state": "configuring", 00:15:38.997 "raid_level": "raid5f", 00:15:38.997 "superblock": true, 00:15:38.997 "num_base_bdevs": 4, 00:15:38.997 "num_base_bdevs_discovered": 2, 00:15:38.997 "num_base_bdevs_operational": 4, 00:15:38.997 "base_bdevs_list": [ 00:15:38.997 { 00:15:38.997 "name": "BaseBdev1", 00:15:38.997 "uuid": "e6a8d647-1619-4aa6-9a05-0e410763e0bb", 00:15:38.997 "is_configured": true, 00:15:38.997 "data_offset": 2048, 00:15:38.997 "data_size": 63488 00:15:38.997 }, 00:15:38.997 { 00:15:38.997 "name": "BaseBdev2", 00:15:38.997 "uuid": "b63037e9-b369-4ccc-bf6b-864b60887ae1", 00:15:38.997 "is_configured": true, 00:15:38.997 "data_offset": 2048, 00:15:38.997 "data_size": 63488 00:15:38.997 }, 00:15:38.997 { 00:15:38.997 "name": "BaseBdev3", 00:15:38.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.997 "is_configured": false, 00:15:38.997 "data_offset": 0, 00:15:38.997 "data_size": 0 00:15:38.997 }, 00:15:38.997 { 00:15:38.997 "name": "BaseBdev4", 00:15:38.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.997 "is_configured": false, 00:15:38.997 "data_offset": 0, 00:15:38.997 "data_size": 0 00:15:38.997 } 00:15:38.997 ] 00:15:38.997 }' 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.997 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.565 [2024-11-26 13:28:27.923216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.565 BaseBdev3 
00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.565 [ 00:15:39.565 { 00:15:39.565 "name": "BaseBdev3", 00:15:39.565 "aliases": [ 00:15:39.565 "045bc624-85d9-4580-b2ba-55ab8e7ad14d" 00:15:39.565 ], 00:15:39.565 "product_name": "Malloc disk", 00:15:39.565 "block_size": 512, 00:15:39.565 "num_blocks": 65536, 00:15:39.565 "uuid": "045bc624-85d9-4580-b2ba-55ab8e7ad14d", 00:15:39.565 
"assigned_rate_limits": { 00:15:39.565 "rw_ios_per_sec": 0, 00:15:39.565 "rw_mbytes_per_sec": 0, 00:15:39.565 "r_mbytes_per_sec": 0, 00:15:39.565 "w_mbytes_per_sec": 0 00:15:39.565 }, 00:15:39.565 "claimed": true, 00:15:39.565 "claim_type": "exclusive_write", 00:15:39.565 "zoned": false, 00:15:39.565 "supported_io_types": { 00:15:39.565 "read": true, 00:15:39.565 "write": true, 00:15:39.565 "unmap": true, 00:15:39.565 "flush": true, 00:15:39.565 "reset": true, 00:15:39.565 "nvme_admin": false, 00:15:39.565 "nvme_io": false, 00:15:39.565 "nvme_io_md": false, 00:15:39.565 "write_zeroes": true, 00:15:39.565 "zcopy": true, 00:15:39.565 "get_zone_info": false, 00:15:39.565 "zone_management": false, 00:15:39.565 "zone_append": false, 00:15:39.565 "compare": false, 00:15:39.565 "compare_and_write": false, 00:15:39.565 "abort": true, 00:15:39.565 "seek_hole": false, 00:15:39.565 "seek_data": false, 00:15:39.565 "copy": true, 00:15:39.565 "nvme_iov_md": false 00:15:39.565 }, 00:15:39.565 "memory_domains": [ 00:15:39.565 { 00:15:39.565 "dma_device_id": "system", 00:15:39.565 "dma_device_type": 1 00:15:39.565 }, 00:15:39.565 { 00:15:39.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.565 "dma_device_type": 2 00:15:39.565 } 00:15:39.565 ], 00:15:39.565 "driver_specific": {} 00:15:39.565 } 00:15:39.565 ] 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.565 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.565 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.565 "name": "Existed_Raid", 00:15:39.565 "uuid": "cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4", 00:15:39.565 "strip_size_kb": 64, 00:15:39.565 "state": "configuring", 00:15:39.565 "raid_level": "raid5f", 00:15:39.565 "superblock": true, 00:15:39.565 "num_base_bdevs": 4, 00:15:39.565 "num_base_bdevs_discovered": 3, 
00:15:39.565 "num_base_bdevs_operational": 4, 00:15:39.565 "base_bdevs_list": [ 00:15:39.565 { 00:15:39.565 "name": "BaseBdev1", 00:15:39.565 "uuid": "e6a8d647-1619-4aa6-9a05-0e410763e0bb", 00:15:39.565 "is_configured": true, 00:15:39.565 "data_offset": 2048, 00:15:39.565 "data_size": 63488 00:15:39.565 }, 00:15:39.565 { 00:15:39.565 "name": "BaseBdev2", 00:15:39.565 "uuid": "b63037e9-b369-4ccc-bf6b-864b60887ae1", 00:15:39.565 "is_configured": true, 00:15:39.565 "data_offset": 2048, 00:15:39.565 "data_size": 63488 00:15:39.565 }, 00:15:39.565 { 00:15:39.565 "name": "BaseBdev3", 00:15:39.565 "uuid": "045bc624-85d9-4580-b2ba-55ab8e7ad14d", 00:15:39.565 "is_configured": true, 00:15:39.565 "data_offset": 2048, 00:15:39.565 "data_size": 63488 00:15:39.565 }, 00:15:39.565 { 00:15:39.565 "name": "BaseBdev4", 00:15:39.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.565 "is_configured": false, 00:15:39.565 "data_offset": 0, 00:15:39.565 "data_size": 0 00:15:39.565 } 00:15:39.565 ] 00:15:39.565 }' 00:15:39.565 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.565 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.132 [2024-11-26 13:28:28.516780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:40.132 [2024-11-26 13:28:28.517051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:40.132 [2024-11-26 13:28:28.517069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:40.132 [2024-11-26 
13:28:28.517424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:40.132 BaseBdev4 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.132 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.133 [2024-11-26 13:28:28.523453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:40.133 [2024-11-26 13:28:28.523480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:40.133 [2024-11-26 13:28:28.523722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:40.133 13:28:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.133 [ 00:15:40.133 { 00:15:40.133 "name": "BaseBdev4", 00:15:40.133 "aliases": [ 00:15:40.133 "44cffab0-4ee3-495e-ba4f-746c4d7c6563" 00:15:40.133 ], 00:15:40.133 "product_name": "Malloc disk", 00:15:40.133 "block_size": 512, 00:15:40.133 "num_blocks": 65536, 00:15:40.133 "uuid": "44cffab0-4ee3-495e-ba4f-746c4d7c6563", 00:15:40.133 "assigned_rate_limits": { 00:15:40.133 "rw_ios_per_sec": 0, 00:15:40.133 "rw_mbytes_per_sec": 0, 00:15:40.133 "r_mbytes_per_sec": 0, 00:15:40.133 "w_mbytes_per_sec": 0 00:15:40.133 }, 00:15:40.133 "claimed": true, 00:15:40.133 "claim_type": "exclusive_write", 00:15:40.133 "zoned": false, 00:15:40.133 "supported_io_types": { 00:15:40.133 "read": true, 00:15:40.133 "write": true, 00:15:40.133 "unmap": true, 00:15:40.133 "flush": true, 00:15:40.133 "reset": true, 00:15:40.133 "nvme_admin": false, 00:15:40.133 "nvme_io": false, 00:15:40.133 "nvme_io_md": false, 00:15:40.133 "write_zeroes": true, 00:15:40.133 "zcopy": true, 00:15:40.133 "get_zone_info": false, 00:15:40.133 "zone_management": false, 00:15:40.133 "zone_append": false, 00:15:40.133 "compare": false, 00:15:40.133 "compare_and_write": false, 00:15:40.133 "abort": true, 00:15:40.133 "seek_hole": false, 00:15:40.133 "seek_data": false, 00:15:40.133 "copy": true, 00:15:40.133 "nvme_iov_md": false 00:15:40.133 }, 00:15:40.133 "memory_domains": [ 00:15:40.133 { 00:15:40.133 "dma_device_id": "system", 00:15:40.133 "dma_device_type": 1 00:15:40.133 }, 00:15:40.133 { 00:15:40.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.133 "dma_device_type": 2 00:15:40.133 } 00:15:40.133 ], 00:15:40.133 "driver_specific": {} 00:15:40.133 } 00:15:40.133 ] 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.133 13:28:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.133 "name": "Existed_Raid", 00:15:40.133 "uuid": "cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4", 00:15:40.133 "strip_size_kb": 64, 00:15:40.133 "state": "online", 00:15:40.133 "raid_level": "raid5f", 00:15:40.133 "superblock": true, 00:15:40.133 "num_base_bdevs": 4, 00:15:40.133 "num_base_bdevs_discovered": 4, 00:15:40.133 "num_base_bdevs_operational": 4, 00:15:40.133 "base_bdevs_list": [ 00:15:40.133 { 00:15:40.133 "name": "BaseBdev1", 00:15:40.133 "uuid": "e6a8d647-1619-4aa6-9a05-0e410763e0bb", 00:15:40.133 "is_configured": true, 00:15:40.133 "data_offset": 2048, 00:15:40.133 "data_size": 63488 00:15:40.133 }, 00:15:40.133 { 00:15:40.133 "name": "BaseBdev2", 00:15:40.133 "uuid": "b63037e9-b369-4ccc-bf6b-864b60887ae1", 00:15:40.133 "is_configured": true, 00:15:40.133 "data_offset": 2048, 00:15:40.133 "data_size": 63488 00:15:40.133 }, 00:15:40.133 { 00:15:40.133 "name": "BaseBdev3", 00:15:40.133 "uuid": "045bc624-85d9-4580-b2ba-55ab8e7ad14d", 00:15:40.133 "is_configured": true, 00:15:40.133 "data_offset": 2048, 00:15:40.133 "data_size": 63488 00:15:40.133 }, 00:15:40.133 { 00:15:40.133 "name": "BaseBdev4", 00:15:40.133 "uuid": "44cffab0-4ee3-495e-ba4f-746c4d7c6563", 00:15:40.133 "is_configured": true, 00:15:40.133 "data_offset": 2048, 00:15:40.133 "data_size": 63488 00:15:40.133 } 00:15:40.133 ] 00:15:40.133 }' 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.133 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.700 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.701 [2024-11-26 13:28:29.094096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:40.701 "name": "Existed_Raid", 00:15:40.701 "aliases": [ 00:15:40.701 "cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4" 00:15:40.701 ], 00:15:40.701 "product_name": "Raid Volume", 00:15:40.701 "block_size": 512, 00:15:40.701 "num_blocks": 190464, 00:15:40.701 "uuid": "cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4", 00:15:40.701 "assigned_rate_limits": { 00:15:40.701 "rw_ios_per_sec": 0, 00:15:40.701 "rw_mbytes_per_sec": 0, 00:15:40.701 "r_mbytes_per_sec": 0, 00:15:40.701 "w_mbytes_per_sec": 0 00:15:40.701 }, 00:15:40.701 "claimed": false, 00:15:40.701 "zoned": false, 00:15:40.701 "supported_io_types": { 00:15:40.701 "read": true, 00:15:40.701 "write": true, 00:15:40.701 "unmap": false, 00:15:40.701 "flush": false, 
00:15:40.701 "reset": true, 00:15:40.701 "nvme_admin": false, 00:15:40.701 "nvme_io": false, 00:15:40.701 "nvme_io_md": false, 00:15:40.701 "write_zeroes": true, 00:15:40.701 "zcopy": false, 00:15:40.701 "get_zone_info": false, 00:15:40.701 "zone_management": false, 00:15:40.701 "zone_append": false, 00:15:40.701 "compare": false, 00:15:40.701 "compare_and_write": false, 00:15:40.701 "abort": false, 00:15:40.701 "seek_hole": false, 00:15:40.701 "seek_data": false, 00:15:40.701 "copy": false, 00:15:40.701 "nvme_iov_md": false 00:15:40.701 }, 00:15:40.701 "driver_specific": { 00:15:40.701 "raid": { 00:15:40.701 "uuid": "cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4", 00:15:40.701 "strip_size_kb": 64, 00:15:40.701 "state": "online", 00:15:40.701 "raid_level": "raid5f", 00:15:40.701 "superblock": true, 00:15:40.701 "num_base_bdevs": 4, 00:15:40.701 "num_base_bdevs_discovered": 4, 00:15:40.701 "num_base_bdevs_operational": 4, 00:15:40.701 "base_bdevs_list": [ 00:15:40.701 { 00:15:40.701 "name": "BaseBdev1", 00:15:40.701 "uuid": "e6a8d647-1619-4aa6-9a05-0e410763e0bb", 00:15:40.701 "is_configured": true, 00:15:40.701 "data_offset": 2048, 00:15:40.701 "data_size": 63488 00:15:40.701 }, 00:15:40.701 { 00:15:40.701 "name": "BaseBdev2", 00:15:40.701 "uuid": "b63037e9-b369-4ccc-bf6b-864b60887ae1", 00:15:40.701 "is_configured": true, 00:15:40.701 "data_offset": 2048, 00:15:40.701 "data_size": 63488 00:15:40.701 }, 00:15:40.701 { 00:15:40.701 "name": "BaseBdev3", 00:15:40.701 "uuid": "045bc624-85d9-4580-b2ba-55ab8e7ad14d", 00:15:40.701 "is_configured": true, 00:15:40.701 "data_offset": 2048, 00:15:40.701 "data_size": 63488 00:15:40.701 }, 00:15:40.701 { 00:15:40.701 "name": "BaseBdev4", 00:15:40.701 "uuid": "44cffab0-4ee3-495e-ba4f-746c4d7c6563", 00:15:40.701 "is_configured": true, 00:15:40.701 "data_offset": 2048, 00:15:40.701 "data_size": 63488 00:15:40.701 } 00:15:40.701 ] 00:15:40.701 } 00:15:40.701 } 00:15:40.701 }' 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:40.701 BaseBdev2 00:15:40.701 BaseBdev3 00:15:40.701 BaseBdev4' 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.701 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:40.960 13:28:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:40.960 13:28:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.960 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.960 [2024-11-26 13:28:29.470029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.219 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.220 "name": "Existed_Raid", 00:15:41.220 "uuid": "cf2b9db6-b5f6-4afa-84ea-533cd8fe9bc4", 00:15:41.220 "strip_size_kb": 64, 00:15:41.220 "state": "online", 00:15:41.220 "raid_level": "raid5f", 00:15:41.220 "superblock": true, 00:15:41.220 "num_base_bdevs": 4, 00:15:41.220 "num_base_bdevs_discovered": 3, 00:15:41.220 "num_base_bdevs_operational": 3, 00:15:41.220 "base_bdevs_list": [ 00:15:41.220 { 00:15:41.220 "name": 
null, 00:15:41.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.220 "is_configured": false, 00:15:41.220 "data_offset": 0, 00:15:41.220 "data_size": 63488 00:15:41.220 }, 00:15:41.220 { 00:15:41.220 "name": "BaseBdev2", 00:15:41.220 "uuid": "b63037e9-b369-4ccc-bf6b-864b60887ae1", 00:15:41.220 "is_configured": true, 00:15:41.220 "data_offset": 2048, 00:15:41.220 "data_size": 63488 00:15:41.220 }, 00:15:41.220 { 00:15:41.220 "name": "BaseBdev3", 00:15:41.220 "uuid": "045bc624-85d9-4580-b2ba-55ab8e7ad14d", 00:15:41.220 "is_configured": true, 00:15:41.220 "data_offset": 2048, 00:15:41.220 "data_size": 63488 00:15:41.220 }, 00:15:41.220 { 00:15:41.220 "name": "BaseBdev4", 00:15:41.220 "uuid": "44cffab0-4ee3-495e-ba4f-746c4d7c6563", 00:15:41.220 "is_configured": true, 00:15:41.220 "data_offset": 2048, 00:15:41.220 "data_size": 63488 00:15:41.220 } 00:15:41.220 ] 00:15:41.220 }' 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.220 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 [2024-11-26 13:28:30.110474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.787 [2024-11-26 13:28:30.110831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.787 [2024-11-26 13:28:30.175041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 [2024-11-26 13:28:30.231084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.787 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 [2024-11-26 
13:28:30.358083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:42.047 [2024-11-26 13:28:30.358316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.047 13:28:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 BaseBdev2 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.047 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 [ 00:15:42.047 { 00:15:42.047 "name": "BaseBdev2", 00:15:42.047 "aliases": [ 00:15:42.047 "65ea0a8c-931e-4f36-8f89-26ff610b8594" 00:15:42.047 ], 00:15:42.047 "product_name": "Malloc disk", 00:15:42.047 "block_size": 512, 00:15:42.047 
"num_blocks": 65536, 00:15:42.047 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:42.047 "assigned_rate_limits": { 00:15:42.047 "rw_ios_per_sec": 0, 00:15:42.047 "rw_mbytes_per_sec": 0, 00:15:42.047 "r_mbytes_per_sec": 0, 00:15:42.047 "w_mbytes_per_sec": 0 00:15:42.047 }, 00:15:42.047 "claimed": false, 00:15:42.047 "zoned": false, 00:15:42.047 "supported_io_types": { 00:15:42.047 "read": true, 00:15:42.047 "write": true, 00:15:42.047 "unmap": true, 00:15:42.047 "flush": true, 00:15:42.047 "reset": true, 00:15:42.047 "nvme_admin": false, 00:15:42.047 "nvme_io": false, 00:15:42.047 "nvme_io_md": false, 00:15:42.047 "write_zeroes": true, 00:15:42.047 "zcopy": true, 00:15:42.047 "get_zone_info": false, 00:15:42.047 "zone_management": false, 00:15:42.047 "zone_append": false, 00:15:42.047 "compare": false, 00:15:42.047 "compare_and_write": false, 00:15:42.047 "abort": true, 00:15:42.047 "seek_hole": false, 00:15:42.047 "seek_data": false, 00:15:42.047 "copy": true, 00:15:42.047 "nvme_iov_md": false 00:15:42.047 }, 00:15:42.047 "memory_domains": [ 00:15:42.047 { 00:15:42.047 "dma_device_id": "system", 00:15:42.047 "dma_device_type": 1 00:15:42.047 }, 00:15:42.048 { 00:15:42.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.048 "dma_device_type": 2 00:15:42.048 } 00:15:42.048 ], 00:15:42.048 "driver_specific": {} 00:15:42.048 } 00:15:42.048 ] 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:42.048 13:28:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.048 BaseBdev3 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.048 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.048 [ 00:15:42.048 { 00:15:42.048 "name": "BaseBdev3", 00:15:42.048 "aliases": [ 00:15:42.048 
"1e996c35-7898-418e-b815-e1f01eb1e2ac" 00:15:42.048 ], 00:15:42.048 "product_name": "Malloc disk", 00:15:42.048 "block_size": 512, 00:15:42.048 "num_blocks": 65536, 00:15:42.048 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:42.048 "assigned_rate_limits": { 00:15:42.048 "rw_ios_per_sec": 0, 00:15:42.048 "rw_mbytes_per_sec": 0, 00:15:42.048 "r_mbytes_per_sec": 0, 00:15:42.048 "w_mbytes_per_sec": 0 00:15:42.048 }, 00:15:42.048 "claimed": false, 00:15:42.048 "zoned": false, 00:15:42.048 "supported_io_types": { 00:15:42.307 "read": true, 00:15:42.307 "write": true, 00:15:42.307 "unmap": true, 00:15:42.307 "flush": true, 00:15:42.308 "reset": true, 00:15:42.308 "nvme_admin": false, 00:15:42.308 "nvme_io": false, 00:15:42.308 "nvme_io_md": false, 00:15:42.308 "write_zeroes": true, 00:15:42.308 "zcopy": true, 00:15:42.308 "get_zone_info": false, 00:15:42.308 "zone_management": false, 00:15:42.308 "zone_append": false, 00:15:42.308 "compare": false, 00:15:42.308 "compare_and_write": false, 00:15:42.308 "abort": true, 00:15:42.308 "seek_hole": false, 00:15:42.308 "seek_data": false, 00:15:42.308 "copy": true, 00:15:42.308 "nvme_iov_md": false 00:15:42.308 }, 00:15:42.308 "memory_domains": [ 00:15:42.308 { 00:15:42.308 "dma_device_id": "system", 00:15:42.308 "dma_device_type": 1 00:15:42.308 }, 00:15:42.308 { 00:15:42.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.308 "dma_device_type": 2 00:15:42.308 } 00:15:42.308 ], 00:15:42.308 "driver_specific": {} 00:15:42.308 } 00:15:42.308 ] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:42.308 13:28:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 BaseBdev4 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:42.308 [ 00:15:42.308 { 00:15:42.308 "name": "BaseBdev4", 00:15:42.308 "aliases": [ 00:15:42.308 "d8b8497e-25b0-44d5-86fa-cf58e106ee77" 00:15:42.308 ], 00:15:42.308 "product_name": "Malloc disk", 00:15:42.308 "block_size": 512, 00:15:42.308 "num_blocks": 65536, 00:15:42.308 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:42.308 "assigned_rate_limits": { 00:15:42.308 "rw_ios_per_sec": 0, 00:15:42.308 "rw_mbytes_per_sec": 0, 00:15:42.308 "r_mbytes_per_sec": 0, 00:15:42.308 "w_mbytes_per_sec": 0 00:15:42.308 }, 00:15:42.308 "claimed": false, 00:15:42.308 "zoned": false, 00:15:42.308 "supported_io_types": { 00:15:42.308 "read": true, 00:15:42.308 "write": true, 00:15:42.308 "unmap": true, 00:15:42.308 "flush": true, 00:15:42.308 "reset": true, 00:15:42.308 "nvme_admin": false, 00:15:42.308 "nvme_io": false, 00:15:42.308 "nvme_io_md": false, 00:15:42.308 "write_zeroes": true, 00:15:42.308 "zcopy": true, 00:15:42.308 "get_zone_info": false, 00:15:42.308 "zone_management": false, 00:15:42.308 "zone_append": false, 00:15:42.308 "compare": false, 00:15:42.308 "compare_and_write": false, 00:15:42.308 "abort": true, 00:15:42.308 "seek_hole": false, 00:15:42.308 "seek_data": false, 00:15:42.308 "copy": true, 00:15:42.308 "nvme_iov_md": false 00:15:42.308 }, 00:15:42.308 "memory_domains": [ 00:15:42.308 { 00:15:42.308 "dma_device_id": "system", 00:15:42.308 "dma_device_type": 1 00:15:42.308 }, 00:15:42.308 { 00:15:42.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.308 "dma_device_type": 2 00:15:42.308 } 00:15:42.308 ], 00:15:42.308 "driver_specific": {} 00:15:42.308 } 00:15:42.308 ] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:42.308 13:28:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 [2024-11-26 13:28:30.707295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.308 [2024-11-26 13:28:30.707355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.308 [2024-11-26 13:28:30.707383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.308 [2024-11-26 13:28:30.709804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.308 [2024-11-26 13:28:30.709870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.308 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.308 "name": "Existed_Raid", 00:15:42.308 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:42.308 "strip_size_kb": 64, 00:15:42.308 "state": "configuring", 00:15:42.309 "raid_level": "raid5f", 00:15:42.309 "superblock": true, 00:15:42.309 "num_base_bdevs": 4, 00:15:42.309 "num_base_bdevs_discovered": 3, 00:15:42.309 "num_base_bdevs_operational": 4, 00:15:42.309 "base_bdevs_list": [ 00:15:42.309 { 00:15:42.309 "name": "BaseBdev1", 00:15:42.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.309 "is_configured": false, 00:15:42.309 "data_offset": 0, 00:15:42.309 "data_size": 0 00:15:42.309 }, 00:15:42.309 { 00:15:42.309 "name": "BaseBdev2", 00:15:42.309 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:42.309 "is_configured": true, 00:15:42.309 "data_offset": 2048, 00:15:42.309 
"data_size": 63488 00:15:42.309 }, 00:15:42.309 { 00:15:42.309 "name": "BaseBdev3", 00:15:42.309 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:42.309 "is_configured": true, 00:15:42.309 "data_offset": 2048, 00:15:42.309 "data_size": 63488 00:15:42.309 }, 00:15:42.309 { 00:15:42.309 "name": "BaseBdev4", 00:15:42.309 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:42.309 "is_configured": true, 00:15:42.309 "data_offset": 2048, 00:15:42.309 "data_size": 63488 00:15:42.309 } 00:15:42.309 ] 00:15:42.309 }' 00:15:42.309 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.309 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.878 [2024-11-26 13:28:31.239416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.878 13:28:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.878 "name": "Existed_Raid", 00:15:42.878 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:42.878 "strip_size_kb": 64, 00:15:42.878 "state": "configuring", 00:15:42.878 "raid_level": "raid5f", 00:15:42.878 "superblock": true, 00:15:42.878 "num_base_bdevs": 4, 00:15:42.878 "num_base_bdevs_discovered": 2, 00:15:42.878 "num_base_bdevs_operational": 4, 00:15:42.878 "base_bdevs_list": [ 00:15:42.878 { 00:15:42.878 "name": "BaseBdev1", 00:15:42.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.878 "is_configured": false, 00:15:42.878 "data_offset": 0, 00:15:42.878 "data_size": 0 00:15:42.878 }, 00:15:42.878 { 00:15:42.878 "name": null, 00:15:42.878 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:42.878 
"is_configured": false, 00:15:42.878 "data_offset": 0, 00:15:42.878 "data_size": 63488 00:15:42.878 }, 00:15:42.878 { 00:15:42.878 "name": "BaseBdev3", 00:15:42.878 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:42.878 "is_configured": true, 00:15:42.878 "data_offset": 2048, 00:15:42.878 "data_size": 63488 00:15:42.878 }, 00:15:42.878 { 00:15:42.878 "name": "BaseBdev4", 00:15:42.878 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:42.878 "is_configured": true, 00:15:42.878 "data_offset": 2048, 00:15:42.878 "data_size": 63488 00:15:42.878 } 00:15:42.878 ] 00:15:42.878 }' 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.878 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.447 [2024-11-26 13:28:31.857301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:43.447 BaseBdev1 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.447 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.447 [ 00:15:43.447 { 00:15:43.447 "name": "BaseBdev1", 00:15:43.447 "aliases": [ 00:15:43.447 "10bad064-3c62-4319-96f8-43cc818d9102" 00:15:43.447 ], 00:15:43.447 "product_name": "Malloc disk", 00:15:43.447 "block_size": 512, 00:15:43.447 "num_blocks": 65536, 00:15:43.447 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 
00:15:43.447 "assigned_rate_limits": { 00:15:43.447 "rw_ios_per_sec": 0, 00:15:43.447 "rw_mbytes_per_sec": 0, 00:15:43.447 "r_mbytes_per_sec": 0, 00:15:43.447 "w_mbytes_per_sec": 0 00:15:43.447 }, 00:15:43.447 "claimed": true, 00:15:43.447 "claim_type": "exclusive_write", 00:15:43.447 "zoned": false, 00:15:43.447 "supported_io_types": { 00:15:43.447 "read": true, 00:15:43.447 "write": true, 00:15:43.447 "unmap": true, 00:15:43.447 "flush": true, 00:15:43.447 "reset": true, 00:15:43.447 "nvme_admin": false, 00:15:43.447 "nvme_io": false, 00:15:43.447 "nvme_io_md": false, 00:15:43.447 "write_zeroes": true, 00:15:43.447 "zcopy": true, 00:15:43.447 "get_zone_info": false, 00:15:43.447 "zone_management": false, 00:15:43.447 "zone_append": false, 00:15:43.447 "compare": false, 00:15:43.447 "compare_and_write": false, 00:15:43.447 "abort": true, 00:15:43.447 "seek_hole": false, 00:15:43.447 "seek_data": false, 00:15:43.447 "copy": true, 00:15:43.447 "nvme_iov_md": false 00:15:43.447 }, 00:15:43.447 "memory_domains": [ 00:15:43.447 { 00:15:43.447 "dma_device_id": "system", 00:15:43.447 "dma_device_type": 1 00:15:43.447 }, 00:15:43.447 { 00:15:43.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.447 "dma_device_type": 2 00:15:43.447 } 00:15:43.447 ], 00:15:43.447 "driver_specific": {} 00:15:43.447 } 00:15:43.447 ] 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.448 13:28:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.448 "name": "Existed_Raid", 00:15:43.448 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:43.448 "strip_size_kb": 64, 00:15:43.448 "state": "configuring", 00:15:43.448 "raid_level": "raid5f", 00:15:43.448 "superblock": true, 00:15:43.448 "num_base_bdevs": 4, 00:15:43.448 "num_base_bdevs_discovered": 3, 00:15:43.448 "num_base_bdevs_operational": 4, 00:15:43.448 "base_bdevs_list": [ 00:15:43.448 { 00:15:43.448 "name": "BaseBdev1", 00:15:43.448 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 
00:15:43.448 "is_configured": true, 00:15:43.448 "data_offset": 2048, 00:15:43.448 "data_size": 63488 00:15:43.448 }, 00:15:43.448 { 00:15:43.448 "name": null, 00:15:43.448 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:43.448 "is_configured": false, 00:15:43.448 "data_offset": 0, 00:15:43.448 "data_size": 63488 00:15:43.448 }, 00:15:43.448 { 00:15:43.448 "name": "BaseBdev3", 00:15:43.448 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:43.448 "is_configured": true, 00:15:43.448 "data_offset": 2048, 00:15:43.448 "data_size": 63488 00:15:43.448 }, 00:15:43.448 { 00:15:43.448 "name": "BaseBdev4", 00:15:43.448 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:43.448 "is_configured": true, 00:15:43.448 "data_offset": 2048, 00:15:43.448 "data_size": 63488 00:15:43.448 } 00:15:43.448 ] 00:15:43.448 }' 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.448 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 [2024-11-26 13:28:32.473553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.016 "name": "Existed_Raid", 00:15:44.016 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:44.016 "strip_size_kb": 64, 00:15:44.016 "state": "configuring", 00:15:44.016 "raid_level": "raid5f", 00:15:44.016 "superblock": true, 00:15:44.016 "num_base_bdevs": 4, 00:15:44.016 "num_base_bdevs_discovered": 2, 00:15:44.016 "num_base_bdevs_operational": 4, 00:15:44.016 "base_bdevs_list": [ 00:15:44.016 { 00:15:44.016 "name": "BaseBdev1", 00:15:44.016 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 00:15:44.016 "is_configured": true, 00:15:44.016 "data_offset": 2048, 00:15:44.016 "data_size": 63488 00:15:44.016 }, 00:15:44.016 { 00:15:44.016 "name": null, 00:15:44.016 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:44.016 "is_configured": false, 00:15:44.016 "data_offset": 0, 00:15:44.016 "data_size": 63488 00:15:44.016 }, 00:15:44.016 { 00:15:44.016 "name": null, 00:15:44.016 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:44.016 "is_configured": false, 00:15:44.016 "data_offset": 0, 00:15:44.016 "data_size": 63488 00:15:44.016 }, 00:15:44.016 { 00:15:44.016 "name": "BaseBdev4", 00:15:44.016 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:44.016 "is_configured": true, 00:15:44.016 "data_offset": 2048, 00:15:44.016 "data_size": 63488 00:15:44.016 } 00:15:44.016 ] 00:15:44.016 }' 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.016 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.585 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.585 13:28:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.585 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.585 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.585 [2024-11-26 13:28:33.045698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.585 "name": "Existed_Raid", 00:15:44.585 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:44.585 "strip_size_kb": 64, 00:15:44.585 "state": "configuring", 00:15:44.585 "raid_level": "raid5f", 00:15:44.585 "superblock": true, 00:15:44.585 "num_base_bdevs": 4, 00:15:44.585 "num_base_bdevs_discovered": 3, 00:15:44.585 "num_base_bdevs_operational": 4, 00:15:44.585 "base_bdevs_list": [ 00:15:44.585 { 00:15:44.585 "name": "BaseBdev1", 00:15:44.585 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 00:15:44.585 "is_configured": true, 00:15:44.585 "data_offset": 2048, 00:15:44.585 "data_size": 63488 00:15:44.585 }, 00:15:44.585 { 00:15:44.585 "name": null, 00:15:44.585 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:44.585 "is_configured": false, 00:15:44.585 "data_offset": 0, 00:15:44.585 "data_size": 63488 00:15:44.585 }, 00:15:44.585 { 00:15:44.585 "name": "BaseBdev3", 00:15:44.585 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 
00:15:44.585 "is_configured": true, 00:15:44.585 "data_offset": 2048, 00:15:44.585 "data_size": 63488 00:15:44.585 }, 00:15:44.585 { 00:15:44.585 "name": "BaseBdev4", 00:15:44.585 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:44.585 "is_configured": true, 00:15:44.585 "data_offset": 2048, 00:15:44.585 "data_size": 63488 00:15:44.585 } 00:15:44.585 ] 00:15:44.585 }' 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.585 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.154 [2024-11-26 13:28:33.625899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.154 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.414 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.414 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.414 "name": "Existed_Raid", 00:15:45.414 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:45.414 "strip_size_kb": 64, 00:15:45.414 "state": "configuring", 00:15:45.414 "raid_level": "raid5f", 
00:15:45.414 "superblock": true, 00:15:45.414 "num_base_bdevs": 4, 00:15:45.414 "num_base_bdevs_discovered": 2, 00:15:45.414 "num_base_bdevs_operational": 4, 00:15:45.414 "base_bdevs_list": [ 00:15:45.414 { 00:15:45.414 "name": null, 00:15:45.414 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 00:15:45.414 "is_configured": false, 00:15:45.414 "data_offset": 0, 00:15:45.414 "data_size": 63488 00:15:45.414 }, 00:15:45.414 { 00:15:45.414 "name": null, 00:15:45.414 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:45.414 "is_configured": false, 00:15:45.414 "data_offset": 0, 00:15:45.414 "data_size": 63488 00:15:45.414 }, 00:15:45.414 { 00:15:45.414 "name": "BaseBdev3", 00:15:45.414 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:45.414 "is_configured": true, 00:15:45.414 "data_offset": 2048, 00:15:45.414 "data_size": 63488 00:15:45.414 }, 00:15:45.414 { 00:15:45.414 "name": "BaseBdev4", 00:15:45.414 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:45.414 "is_configured": true, 00:15:45.414 "data_offset": 2048, 00:15:45.414 "data_size": 63488 00:15:45.414 } 00:15:45.414 ] 00:15:45.414 }' 00:15:45.414 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.414 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.673 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.673 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.673 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.673 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.673 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.933 [2024-11-26 13:28:34.278068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.933 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.933 "name": "Existed_Raid", 00:15:45.933 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:45.933 "strip_size_kb": 64, 00:15:45.933 "state": "configuring", 00:15:45.933 "raid_level": "raid5f", 00:15:45.933 "superblock": true, 00:15:45.933 "num_base_bdevs": 4, 00:15:45.933 "num_base_bdevs_discovered": 3, 00:15:45.933 "num_base_bdevs_operational": 4, 00:15:45.933 "base_bdevs_list": [ 00:15:45.933 { 00:15:45.933 "name": null, 00:15:45.934 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 00:15:45.934 "is_configured": false, 00:15:45.934 "data_offset": 0, 00:15:45.934 "data_size": 63488 00:15:45.934 }, 00:15:45.934 { 00:15:45.934 "name": "BaseBdev2", 00:15:45.934 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:45.934 "is_configured": true, 00:15:45.934 "data_offset": 2048, 00:15:45.934 "data_size": 63488 00:15:45.934 }, 00:15:45.934 { 00:15:45.934 "name": "BaseBdev3", 00:15:45.934 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:45.934 "is_configured": true, 00:15:45.934 "data_offset": 2048, 00:15:45.934 "data_size": 63488 00:15:45.934 }, 00:15:45.934 { 00:15:45.934 "name": "BaseBdev4", 00:15:45.934 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:45.934 "is_configured": true, 00:15:45.934 "data_offset": 2048, 00:15:45.934 "data_size": 63488 00:15:45.934 } 00:15:45.934 ] 00:15:45.934 }' 00:15:45.934 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:15:45.934 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 10bad064-3c62-4319-96f8-43cc818d9102 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.503 [2024-11-26 13:28:34.949475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:46.503 [2024-11-26 13:28:34.949730] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:46.503 [2024-11-26 13:28:34.949746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:46.503 NewBaseBdev 00:15:46.503 [2024-11-26 13:28:34.950042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.503 [2024-11-26 13:28:34.955746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:46.503 [2024-11-26 13:28:34.955939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:46.503 [2024-11-26 13:28:34.956211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.503 [ 00:15:46.503 { 00:15:46.503 "name": "NewBaseBdev", 00:15:46.503 "aliases": [ 00:15:46.503 "10bad064-3c62-4319-96f8-43cc818d9102" 00:15:46.503 ], 00:15:46.503 "product_name": "Malloc disk", 00:15:46.503 "block_size": 512, 00:15:46.503 "num_blocks": 65536, 00:15:46.503 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 00:15:46.503 "assigned_rate_limits": { 00:15:46.503 "rw_ios_per_sec": 0, 00:15:46.503 "rw_mbytes_per_sec": 0, 00:15:46.503 "r_mbytes_per_sec": 0, 00:15:46.503 "w_mbytes_per_sec": 0 00:15:46.503 }, 00:15:46.503 "claimed": true, 00:15:46.503 "claim_type": "exclusive_write", 00:15:46.503 "zoned": false, 00:15:46.503 "supported_io_types": { 00:15:46.503 "read": true, 00:15:46.503 "write": true, 00:15:46.503 "unmap": true, 00:15:46.503 "flush": true, 00:15:46.503 "reset": true, 00:15:46.503 "nvme_admin": false, 00:15:46.503 "nvme_io": false, 00:15:46.503 "nvme_io_md": false, 00:15:46.503 "write_zeroes": true, 00:15:46.503 "zcopy": true, 00:15:46.503 "get_zone_info": false, 00:15:46.503 "zone_management": false, 00:15:46.503 "zone_append": false, 00:15:46.503 "compare": false, 00:15:46.503 "compare_and_write": false, 00:15:46.503 "abort": true, 00:15:46.503 "seek_hole": false, 00:15:46.503 "seek_data": false, 00:15:46.503 "copy": true, 00:15:46.503 "nvme_iov_md": false 00:15:46.503 }, 00:15:46.503 "memory_domains": [ 00:15:46.503 { 00:15:46.503 "dma_device_id": "system", 00:15:46.503 "dma_device_type": 1 00:15:46.503 }, 00:15:46.503 { 00:15:46.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.503 "dma_device_type": 2 00:15:46.503 } 
00:15:46.503 ], 00:15:46.503 "driver_specific": {} 00:15:46.503 } 00:15:46.503 ] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.503 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.503 
13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.503 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.503 "name": "Existed_Raid", 00:15:46.504 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:46.504 "strip_size_kb": 64, 00:15:46.504 "state": "online", 00:15:46.504 "raid_level": "raid5f", 00:15:46.504 "superblock": true, 00:15:46.504 "num_base_bdevs": 4, 00:15:46.504 "num_base_bdevs_discovered": 4, 00:15:46.504 "num_base_bdevs_operational": 4, 00:15:46.504 "base_bdevs_list": [ 00:15:46.504 { 00:15:46.504 "name": "NewBaseBdev", 00:15:46.504 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 00:15:46.504 "is_configured": true, 00:15:46.504 "data_offset": 2048, 00:15:46.504 "data_size": 63488 00:15:46.504 }, 00:15:46.504 { 00:15:46.504 "name": "BaseBdev2", 00:15:46.504 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:46.504 "is_configured": true, 00:15:46.504 "data_offset": 2048, 00:15:46.504 "data_size": 63488 00:15:46.504 }, 00:15:46.504 { 00:15:46.504 "name": "BaseBdev3", 00:15:46.504 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:46.504 "is_configured": true, 00:15:46.504 "data_offset": 2048, 00:15:46.504 "data_size": 63488 00:15:46.504 }, 00:15:46.504 { 00:15:46.504 "name": "BaseBdev4", 00:15:46.504 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:46.504 "is_configured": true, 00:15:46.504 "data_offset": 2048, 00:15:46.504 "data_size": 63488 00:15:46.504 } 00:15:46.504 ] 00:15:46.504 }' 00:15:46.504 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.504 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.071 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.072 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.072 [2024-11-26 13:28:35.522472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.072 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.072 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.072 "name": "Existed_Raid", 00:15:47.072 "aliases": [ 00:15:47.072 "bca92998-9b65-4761-ae10-78965a5573c8" 00:15:47.072 ], 00:15:47.072 "product_name": "Raid Volume", 00:15:47.072 "block_size": 512, 00:15:47.072 "num_blocks": 190464, 00:15:47.072 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:47.072 "assigned_rate_limits": { 00:15:47.072 "rw_ios_per_sec": 0, 00:15:47.072 "rw_mbytes_per_sec": 0, 00:15:47.072 "r_mbytes_per_sec": 0, 00:15:47.072 "w_mbytes_per_sec": 0 00:15:47.072 }, 00:15:47.072 "claimed": false, 00:15:47.072 "zoned": false, 00:15:47.072 "supported_io_types": { 00:15:47.072 "read": true, 00:15:47.072 "write": true, 00:15:47.072 "unmap": false, 00:15:47.072 "flush": false, 
00:15:47.072 "reset": true, 00:15:47.072 "nvme_admin": false, 00:15:47.072 "nvme_io": false, 00:15:47.072 "nvme_io_md": false, 00:15:47.072 "write_zeroes": true, 00:15:47.072 "zcopy": false, 00:15:47.072 "get_zone_info": false, 00:15:47.072 "zone_management": false, 00:15:47.072 "zone_append": false, 00:15:47.072 "compare": false, 00:15:47.072 "compare_and_write": false, 00:15:47.072 "abort": false, 00:15:47.072 "seek_hole": false, 00:15:47.072 "seek_data": false, 00:15:47.072 "copy": false, 00:15:47.072 "nvme_iov_md": false 00:15:47.072 }, 00:15:47.072 "driver_specific": { 00:15:47.072 "raid": { 00:15:47.072 "uuid": "bca92998-9b65-4761-ae10-78965a5573c8", 00:15:47.072 "strip_size_kb": 64, 00:15:47.072 "state": "online", 00:15:47.072 "raid_level": "raid5f", 00:15:47.072 "superblock": true, 00:15:47.072 "num_base_bdevs": 4, 00:15:47.072 "num_base_bdevs_discovered": 4, 00:15:47.072 "num_base_bdevs_operational": 4, 00:15:47.072 "base_bdevs_list": [ 00:15:47.072 { 00:15:47.072 "name": "NewBaseBdev", 00:15:47.072 "uuid": "10bad064-3c62-4319-96f8-43cc818d9102", 00:15:47.072 "is_configured": true, 00:15:47.072 "data_offset": 2048, 00:15:47.072 "data_size": 63488 00:15:47.072 }, 00:15:47.072 { 00:15:47.072 "name": "BaseBdev2", 00:15:47.072 "uuid": "65ea0a8c-931e-4f36-8f89-26ff610b8594", 00:15:47.072 "is_configured": true, 00:15:47.072 "data_offset": 2048, 00:15:47.072 "data_size": 63488 00:15:47.072 }, 00:15:47.072 { 00:15:47.072 "name": "BaseBdev3", 00:15:47.072 "uuid": "1e996c35-7898-418e-b815-e1f01eb1e2ac", 00:15:47.072 "is_configured": true, 00:15:47.072 "data_offset": 2048, 00:15:47.072 "data_size": 63488 00:15:47.072 }, 00:15:47.072 { 00:15:47.072 "name": "BaseBdev4", 00:15:47.072 "uuid": "d8b8497e-25b0-44d5-86fa-cf58e106ee77", 00:15:47.072 "is_configured": true, 00:15:47.072 "data_offset": 2048, 00:15:47.072 "data_size": 63488 00:15:47.072 } 00:15:47.072 ] 00:15:47.072 } 00:15:47.072 } 00:15:47.072 }' 00:15:47.072 13:28:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.072 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:47.072 BaseBdev2 00:15:47.072 BaseBdev3 00:15:47.072 BaseBdev4' 00:15:47.072 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.331 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.590 [2024-11-26 13:28:35.898367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.590 [2024-11-26 13:28:35.898395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.590 [2024-11-26 13:28:35.898463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.590 [2024-11-26 13:28:35.898798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.590 [2024-11-26 13:28:35.898814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:47.590 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.590 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83082 00:15:47.590 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83082 ']' 00:15:47.590 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
83082 00:15:47.590 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:47.591 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.591 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83082 00:15:47.591 killing process with pid 83082 00:15:47.591 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.591 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.591 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83082' 00:15:47.591 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83082 00:15:47.591 [2024-11-26 13:28:35.937332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.591 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83082 00:15:47.849 [2024-11-26 13:28:36.207555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.788 ************************************ 00:15:48.788 END TEST raid5f_state_function_test_sb 00:15:48.788 ************************************ 00:15:48.788 13:28:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:48.788 00:15:48.788 real 0m12.445s 00:15:48.788 user 0m20.930s 00:15:48.788 sys 0m1.837s 00:15:48.788 13:28:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.788 13:28:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.788 13:28:37 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:48.788 13:28:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:15:48.788 13:28:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.788 13:28:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.788 ************************************ 00:15:48.788 START TEST raid5f_superblock_test 00:15:48.788 ************************************ 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:48.788 13:28:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83760 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83760 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83760 ']' 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.788 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.788 [2024-11-26 13:28:37.220874] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:15:48.788 [2024-11-26 13:28:37.221267] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83760 ] 00:15:49.047 [2024-11-26 13:28:37.391210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.047 [2024-11-26 13:28:37.488191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.306 [2024-11-26 13:28:37.655171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.306 [2024-11-26 13:28:37.655213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.565 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 malloc1 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 [2024-11-26 13:28:38.143726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.825 [2024-11-26 13:28:38.143792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.825 [2024-11-26 13:28:38.143825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:49.825 [2024-11-26 13:28:38.143839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.825 [2024-11-26 13:28:38.146251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.825 [2024-11-26 13:28:38.146287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.825 pt1 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 malloc2 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 [2024-11-26 13:28:38.189120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.825 [2024-11-26 13:28:38.189172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.825 [2024-11-26 13:28:38.189198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:49.825 [2024-11-26 13:28:38.189209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.825 [2024-11-26 13:28:38.191636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.825 [2024-11-26 13:28:38.191808] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.825 pt2 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 malloc3 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 [2024-11-26 13:28:38.249889] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:49.825 [2024-11-26 13:28:38.249944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.825 [2024-11-26 13:28:38.249970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:49.825 [2024-11-26 13:28:38.249983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.825 [2024-11-26 13:28:38.252356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.825 [2024-11-26 13:28:38.252396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:49.825 pt3 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 13:28:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 malloc4 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 [2024-11-26 13:28:38.299274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:49.826 [2024-11-26 13:28:38.299468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.826 [2024-11-26 13:28:38.299535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:49.826 [2024-11-26 13:28:38.299645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.826 [2024-11-26 13:28:38.302027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.826 [2024-11-26 13:28:38.302186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:49.826 pt4 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.826 [2024-11-26 13:28:38.311302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.826 [2024-11-26 13:28:38.313470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.826 [2024-11-26 13:28:38.313679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:49.826 [2024-11-26 13:28:38.313808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:49.826 [2024-11-26 13:28:38.314083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:49.826 [2024-11-26 13:28:38.314206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:49.826 [2024-11-26 13:28:38.314631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:49.826 [2024-11-26 13:28:38.320355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:49.826 [2024-11-26 13:28:38.320494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:49.826 [2024-11-26 13:28:38.320815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.826 
13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.826 "name": "raid_bdev1", 00:15:49.826 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c", 00:15:49.826 "strip_size_kb": 64, 00:15:49.826 "state": "online", 00:15:49.826 "raid_level": "raid5f", 00:15:49.826 "superblock": true, 00:15:49.826 "num_base_bdevs": 4, 00:15:49.826 "num_base_bdevs_discovered": 4, 00:15:49.826 "num_base_bdevs_operational": 4, 00:15:49.826 "base_bdevs_list": [ 00:15:49.826 { 00:15:49.826 "name": "pt1", 00:15:49.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.826 "is_configured": true, 00:15:49.826 "data_offset": 2048, 00:15:49.826 "data_size": 63488 00:15:49.826 }, 00:15:49.826 { 00:15:49.826 "name": "pt2", 00:15:49.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.826 "is_configured": true, 00:15:49.826 "data_offset": 2048, 00:15:49.826 
"data_size": 63488 00:15:49.826 }, 00:15:49.826 { 00:15:49.826 "name": "pt3", 00:15:49.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.826 "is_configured": true, 00:15:49.826 "data_offset": 2048, 00:15:49.826 "data_size": 63488 00:15:49.826 }, 00:15:49.826 { 00:15:49.826 "name": "pt4", 00:15:49.826 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.826 "is_configured": true, 00:15:49.826 "data_offset": 2048, 00:15:49.826 "data_size": 63488 00:15:49.826 } 00:15:49.826 ] 00:15:49.826 }' 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.826 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.399 [2024-11-26 13:28:38.855788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.399 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.399 "name": "raid_bdev1", 00:15:50.399 "aliases": [ 00:15:50.399 "a8269a04-9475-4232-9975-fe2b4a8dae8c" 00:15:50.399 ], 00:15:50.399 "product_name": "Raid Volume", 00:15:50.399 "block_size": 512, 00:15:50.399 "num_blocks": 190464, 00:15:50.399 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c", 00:15:50.399 "assigned_rate_limits": { 00:15:50.399 "rw_ios_per_sec": 0, 00:15:50.399 "rw_mbytes_per_sec": 0, 00:15:50.399 "r_mbytes_per_sec": 0, 00:15:50.399 "w_mbytes_per_sec": 0 00:15:50.399 }, 00:15:50.399 "claimed": false, 00:15:50.399 "zoned": false, 00:15:50.399 "supported_io_types": { 00:15:50.399 "read": true, 00:15:50.399 "write": true, 00:15:50.399 "unmap": false, 00:15:50.399 "flush": false, 00:15:50.399 "reset": true, 00:15:50.399 "nvme_admin": false, 00:15:50.399 "nvme_io": false, 00:15:50.399 "nvme_io_md": false, 00:15:50.399 "write_zeroes": true, 00:15:50.399 "zcopy": false, 00:15:50.399 "get_zone_info": false, 00:15:50.399 "zone_management": false, 00:15:50.399 "zone_append": false, 00:15:50.399 "compare": false, 00:15:50.399 "compare_and_write": false, 00:15:50.399 "abort": false, 00:15:50.399 "seek_hole": false, 00:15:50.399 "seek_data": false, 00:15:50.399 "copy": false, 00:15:50.399 "nvme_iov_md": false 00:15:50.399 }, 00:15:50.399 "driver_specific": { 00:15:50.399 "raid": { 00:15:50.399 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c", 00:15:50.399 "strip_size_kb": 64, 00:15:50.399 "state": "online", 00:15:50.399 "raid_level": "raid5f", 00:15:50.399 "superblock": true, 00:15:50.399 "num_base_bdevs": 4, 00:15:50.399 "num_base_bdevs_discovered": 4, 00:15:50.399 "num_base_bdevs_operational": 4, 00:15:50.399 "base_bdevs_list": [ 00:15:50.399 { 00:15:50.399 "name": "pt1", 00:15:50.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.399 "is_configured": true, 00:15:50.399 "data_offset": 2048, 
00:15:50.399 "data_size": 63488 00:15:50.399 }, 00:15:50.399 { 00:15:50.399 "name": "pt2", 00:15:50.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.399 "is_configured": true, 00:15:50.399 "data_offset": 2048, 00:15:50.399 "data_size": 63488 00:15:50.399 }, 00:15:50.399 { 00:15:50.399 "name": "pt3", 00:15:50.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.399 "is_configured": true, 00:15:50.399 "data_offset": 2048, 00:15:50.399 "data_size": 63488 00:15:50.399 }, 00:15:50.400 { 00:15:50.400 "name": "pt4", 00:15:50.400 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.400 "is_configured": true, 00:15:50.400 "data_offset": 2048, 00:15:50.400 "data_size": 63488 00:15:50.400 } 00:15:50.400 ] 00:15:50.400 } 00:15:50.400 } 00:15:50.400 }' 00:15:50.400 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.400 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:50.400 pt2 00:15:50.400 pt3 00:15:50.400 pt4' 00:15:50.400 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 13:28:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 [2024-11-26 13:28:39.223837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a8269a04-9475-4232-9975-fe2b4a8dae8c 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
a8269a04-9475-4232-9975-fe2b4a8dae8c ']' 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.919 [2024-11-26 13:28:39.275699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.919 [2024-11-26 13:28:39.275723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.919 [2024-11-26 13:28:39.275791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.919 [2024-11-26 13:28:39.275871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.919 [2024-11-26 13:28:39.275891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:50.919 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.920 
13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 13:28:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 [2024-11-26 13:28:39.435753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:50.920 [2024-11-26 13:28:39.437804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:50.920 [2024-11-26 13:28:39.437861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:50.920 [2024-11-26 13:28:39.437906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:50.920 [2024-11-26 13:28:39.437964] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:50.920 [2024-11-26 13:28:39.438018] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:50.920 [2024-11-26 13:28:39.438047] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:50.920 [2024-11-26 13:28:39.438074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:50.920 [2024-11-26 13:28:39.438092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.920 [2024-11-26 13:28:39.438104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:50.920 request: 00:15:50.920 { 00:15:50.920 "name": "raid_bdev1", 00:15:50.920 "raid_level": "raid5f", 00:15:50.920 "base_bdevs": [ 00:15:50.920 "malloc1", 00:15:50.920 "malloc2", 00:15:50.920 "malloc3", 00:15:50.920 "malloc4" 00:15:50.920 ], 00:15:50.920 "strip_size_kb": 64, 00:15:50.920 "superblock": false, 00:15:50.920 "method": "bdev_raid_create", 00:15:50.920 "req_id": 1 00:15:50.920 } 00:15:50.920 Got JSON-RPC error response 
00:15:50.920 response: 00:15:50.920 { 00:15:50.920 "code": -17, 00:15:50.920 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:50.920 } 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.180 [2024-11-26 13:28:39.499760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.180 [2024-11-26 13:28:39.499812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened
00:15:51.180 [2024-11-26 13:28:39.499830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:51.180 [2024-11-26 13:28:39.499844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.180 [2024-11-26 13:28:39.502243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.180 [2024-11-26 13:28:39.502439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:51.180 [2024-11-26 13:28:39.502528] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:51.180 [2024-11-26 13:28:39.502594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:51.180 pt1
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.180 "name": "raid_bdev1",
00:15:51.180 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c",
00:15:51.180 "strip_size_kb": 64,
00:15:51.180 "state": "configuring",
00:15:51.180 "raid_level": "raid5f",
00:15:51.180 "superblock": true,
00:15:51.180 "num_base_bdevs": 4,
00:15:51.180 "num_base_bdevs_discovered": 1,
00:15:51.180 "num_base_bdevs_operational": 4,
00:15:51.180 "base_bdevs_list": [
00:15:51.180 {
00:15:51.180 "name": "pt1",
00:15:51.180 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:51.180 "is_configured": true,
00:15:51.180 "data_offset": 2048,
00:15:51.180 "data_size": 63488
00:15:51.180 },
00:15:51.180 {
00:15:51.180 "name": null,
00:15:51.180 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:51.180 "is_configured": false,
00:15:51.180 "data_offset": 2048,
00:15:51.180 "data_size": 63488
00:15:51.180 },
00:15:51.180 {
00:15:51.180 "name": null,
00:15:51.180 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:51.180 "is_configured": false,
00:15:51.180 "data_offset": 2048,
00:15:51.180 "data_size": 63488
00:15:51.180 },
00:15:51.180 {
00:15:51.180 "name": null,
00:15:51.180 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:51.180 "is_configured": false,
00:15:51.180 "data_offset": 2048,
00:15:51.180 "data_size": 63488
00:15:51.180 }
00:15:51.180 ]
00:15:51.180 }'
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.180 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:51.748 [2024-11-26 13:28:40.027885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:51.748 [2024-11-26 13:28:40.027941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.748 [2024-11-26 13:28:40.027961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:15:51.748 [2024-11-26 13:28:40.027973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.748 [2024-11-26 13:28:40.028385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.748 [2024-11-26 13:28:40.028415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:51.748 [2024-11-26 13:28:40.028480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:51.748 [2024-11-26 13:28:40.028510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:51.748 pt2
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:51.748 [2024-11-26 13:28:40.035911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.748 "name": "raid_bdev1",
00:15:51.748 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c",
00:15:51.748 "strip_size_kb": 64,
00:15:51.748 "state": "configuring",
00:15:51.748 "raid_level": "raid5f",
00:15:51.748 "superblock": true,
00:15:51.748 "num_base_bdevs": 4,
00:15:51.748 "num_base_bdevs_discovered": 1,
00:15:51.748 "num_base_bdevs_operational": 4,
00:15:51.748 "base_bdevs_list": [
00:15:51.748 {
00:15:51.748 "name": "pt1",
00:15:51.748 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:51.748 "is_configured": true,
00:15:51.748 "data_offset": 2048,
00:15:51.748 "data_size": 63488
00:15:51.748 },
00:15:51.748 {
00:15:51.748 "name": null,
00:15:51.748 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:51.748 "is_configured": false,
00:15:51.748 "data_offset": 0,
00:15:51.748 "data_size": 63488
00:15:51.748 },
00:15:51.748 {
00:15:51.748 "name": null,
00:15:51.748 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:51.748 "is_configured": false,
00:15:51.748 "data_offset": 2048,
00:15:51.748 "data_size": 63488
00:15:51.748 },
00:15:51.748 {
00:15:51.748 "name": null,
00:15:51.748 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:51.748 "is_configured": false,
00:15:51.748 "data_offset": 2048,
00:15:51.748 "data_size": 63488
00:15:51.748 }
00:15:51.748 ]
00:15:51.748 }'
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.748 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.007 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:52.007 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:52.007 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:52.007 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.007 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.007 [2024-11-26 13:28:40.567992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:52.007 [2024-11-26 13:28:40.568038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:52.007 [2024-11-26 13:28:40.568061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:15:52.007 [2024-11-26 13:28:40.568072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:52.007 [2024-11-26 13:28:40.568497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:52.007 [2024-11-26 13:28:40.568520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:52.007 [2024-11-26 13:28:40.568621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:52.007 [2024-11-26 13:28:40.568644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:52.266 pt2
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.266 [2024-11-26 13:28:40.580001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:52.266 [2024-11-26 13:28:40.580049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:52.266 [2024-11-26 13:28:40.580071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:15:52.266 [2024-11-26 13:28:40.580083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:52.266 [2024-11-26 13:28:40.580493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:52.266 [2024-11-26 13:28:40.580517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:52.266 [2024-11-26 13:28:40.580598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:52.266 [2024-11-26 13:28:40.580622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:52.266 pt3
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.266 [2024-11-26 13:28:40.587978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:15:52.266 [2024-11-26 13:28:40.588028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:52.266 [2024-11-26 13:28:40.588051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:15:52.266 [2024-11-26 13:28:40.588062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:52.266 [2024-11-26 13:28:40.588467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:52.266 [2024-11-26 13:28:40.588489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:15:52.266 [2024-11-26 13:28:40.588604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:15:52.266 [2024-11-26 13:28:40.588629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:15:52.266 [2024-11-26 13:28:40.588787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:52.266 [2024-11-26 13:28:40.588801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:52.266 [2024-11-26 13:28:40.589049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:52.266 [2024-11-26 13:28:40.594276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:52.266 [2024-11-26 13:28:40.594302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:15:52.266 [2024-11-26 13:28:40.594472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:52.266 pt4
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.266 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.266 "name": "raid_bdev1",
00:15:52.266 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c",
00:15:52.266 "strip_size_kb": 64,
00:15:52.266 "state": "online",
00:15:52.266 "raid_level": "raid5f",
00:15:52.266 "superblock": true,
00:15:52.266 "num_base_bdevs": 4,
00:15:52.266 "num_base_bdevs_discovered": 4,
00:15:52.266 "num_base_bdevs_operational": 4,
00:15:52.266 "base_bdevs_list": [
00:15:52.266 {
00:15:52.266 "name": "pt1",
00:15:52.266 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:52.266 "is_configured": true,
00:15:52.266 "data_offset": 2048,
00:15:52.266 "data_size": 63488
00:15:52.266 },
00:15:52.266 {
00:15:52.266 "name": "pt2",
00:15:52.266 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.266 "is_configured": true,
00:15:52.266 "data_offset": 2048,
00:15:52.266 "data_size": 63488
00:15:52.266 },
00:15:52.266 {
00:15:52.266 "name": "pt3",
00:15:52.266 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:52.266 "is_configured": true,
00:15:52.266 "data_offset": 2048,
00:15:52.266 "data_size": 63488
00:15:52.266 },
00:15:52.266 {
00:15:52.266 "name": "pt4",
00:15:52.266 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:52.266 "is_configured": true,
00:15:52.266 "data_offset": 2048,
00:15:52.266 "data_size": 63488
00:15:52.267 }
00:15:52.267 ]
00:15:52.267 }'
00:15:52.267 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.267 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.835 [2024-11-26 13:28:41.132833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.835 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:52.835 "name": "raid_bdev1",
00:15:52.835 "aliases": [
00:15:52.835 "a8269a04-9475-4232-9975-fe2b4a8dae8c"
00:15:52.835 ],
00:15:52.835 "product_name": "Raid Volume",
00:15:52.835 "block_size": 512,
00:15:52.835 "num_blocks": 190464,
00:15:52.835 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c",
00:15:52.835 "assigned_rate_limits": {
00:15:52.835 "rw_ios_per_sec": 0,
00:15:52.835 "rw_mbytes_per_sec": 0,
00:15:52.835 "r_mbytes_per_sec": 0,
00:15:52.835 "w_mbytes_per_sec": 0
00:15:52.835 },
00:15:52.835 "claimed": false,
00:15:52.835 "zoned": false,
00:15:52.835 "supported_io_types": {
00:15:52.835 "read": true,
00:15:52.835 "write": true,
00:15:52.835 "unmap": false,
00:15:52.835 "flush": false,
00:15:52.835 "reset": true,
00:15:52.835 "nvme_admin": false,
00:15:52.835 "nvme_io": false,
00:15:52.835 "nvme_io_md": false,
00:15:52.835 "write_zeroes": true,
00:15:52.835 "zcopy": false,
00:15:52.835 "get_zone_info": false,
00:15:52.835 "zone_management": false,
00:15:52.835 "zone_append": false,
00:15:52.836 "compare": false,
00:15:52.836 "compare_and_write": false,
00:15:52.836 "abort": false,
00:15:52.836 "seek_hole": false,
00:15:52.836 "seek_data": false,
00:15:52.836 "copy": false,
00:15:52.836 "nvme_iov_md": false
00:15:52.836 },
00:15:52.836 "driver_specific": {
00:15:52.836 "raid": {
00:15:52.836 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c",
00:15:52.836 "strip_size_kb": 64,
00:15:52.836 "state": "online",
00:15:52.836 "raid_level": "raid5f",
00:15:52.836 "superblock": true,
00:15:52.836 "num_base_bdevs": 4,
00:15:52.836 "num_base_bdevs_discovered": 4,
00:15:52.836 "num_base_bdevs_operational": 4,
00:15:52.836 "base_bdevs_list": [
00:15:52.836 {
00:15:52.836 "name": "pt1",
00:15:52.836 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:52.836 "is_configured": true,
00:15:52.836 "data_offset": 2048,
00:15:52.836 "data_size": 63488
00:15:52.836 },
00:15:52.836 {
00:15:52.836 "name": "pt2",
00:15:52.836 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.836 "is_configured": true,
00:15:52.836 "data_offset": 2048,
00:15:52.836 "data_size": 63488
00:15:52.836 },
00:15:52.836 {
00:15:52.836 "name": "pt3",
00:15:52.836 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:52.836 "is_configured": true,
00:15:52.836 "data_offset": 2048,
00:15:52.836 "data_size": 63488
00:15:52.836 },
00:15:52.836 {
00:15:52.836 "name": "pt4",
00:15:52.836 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:52.836 "is_configured": true,
00:15:52.836 "data_offset": 2048,
00:15:52.836 "data_size": 63488
00:15:52.836 }
00:15:52.836 ]
00:15:52.836 }
00:15:52.836 }
00:15:52.836 }'
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:52.836 pt2
00:15:52.836 pt3
00:15:52.836 pt4'
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.836 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.095 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.095 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.096 [2024-11-26 13:28:41.504915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a8269a04-9475-4232-9975-fe2b4a8dae8c '!=' a8269a04-9475-4232-9975-fe2b4a8dae8c ']'
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.096 [2024-11-26 13:28:41.556811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.096 "name": "raid_bdev1",
00:15:53.096 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c",
00:15:53.096 "strip_size_kb": 64,
00:15:53.096 "state": "online",
00:15:53.096 "raid_level": "raid5f",
00:15:53.096 "superblock": true,
00:15:53.096 "num_base_bdevs": 4,
00:15:53.096 "num_base_bdevs_discovered": 3,
00:15:53.096 "num_base_bdevs_operational": 3,
00:15:53.096 "base_bdevs_list": [
00:15:53.096 {
00:15:53.096 "name": null,
00:15:53.096 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.096 "is_configured": false,
00:15:53.096 "data_offset": 0,
00:15:53.096 "data_size": 63488
00:15:53.096 },
00:15:53.096 {
00:15:53.096 "name": "pt2",
00:15:53.096 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:53.096 "is_configured": true,
00:15:53.096 "data_offset": 2048,
00:15:53.096 "data_size": 63488
00:15:53.096 },
00:15:53.096 {
00:15:53.096 "name": "pt3",
00:15:53.096 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:53.096 "is_configured": true,
00:15:53.096 "data_offset": 2048,
00:15:53.096 "data_size": 63488
00:15:53.096 },
00:15:53.096 {
00:15:53.096 "name": "pt4",
00:15:53.096 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:53.096 "is_configured": true,
00:15:53.096 "data_offset": 2048,
00:15:53.096 "data_size": 63488
00:15:53.096 }
00:15:53.096 ]
00:15:53.096 }'
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.096 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.665 [2024-11-26 13:28:42.084902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:53.665 [2024-11-26 13:28:42.084929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:53.665 [2024-11-26 13:28:42.084980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:53.665 [2024-11-26 13:28:42.085051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:53.665 [2024-11-26 13:28:42.085064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.665 [2024-11-26 13:28:42.176910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:53.665 [2024-11-26 13:28:42.176958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:53.665 [2024-11-26 13:28:42.176980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:15:53.665 [2024-11-26 13:28:42.176992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:53.665 [2024-11-26 13:28:42.179418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:53.665 [2024-11-26 13:28:42.179457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:53.665 [2024-11-26 13:28:42.179530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:53.665 [2024-11-26 13:28:42.179574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:53.665 pt2
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:53.665 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.096 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.925 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.925 "name": "raid_bdev1",
00:15:53.925 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c",
00:15:53.925 "strip_size_kb": 64,
00:15:53.925 "state": "configuring",
00:15:53.925 "raid_level": "raid5f",
00:15:53.925 "superblock": true,
00:15:53.925 "num_base_bdevs": 4,
00:15:53.925 "num_base_bdevs_discovered": 1,
00:15:53.925 "num_base_bdevs_operational": 3,
00:15:53.925 "base_bdevs_list": [
00:15:53.925 {
00:15:53.925 "name": null,
00:15:53.925 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.925 "is_configured": false,
00:15:53.925 "data_offset": 2048,
00:15:53.925 "data_size": 63488
00:15:53.925 },
00:15:53.925 {
00:15:53.925 "name": "pt2",
00:15:53.925 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:53.925 "is_configured": true,
00:15:53.925 "data_offset": 2048,
00:15:53.925 "data_size": 63488
00:15:53.925 },
00:15:53.925 {
00:15:53.925 "name": null,
00:15:53.925 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:53.925 "is_configured": false,
00:15:53.925 "data_offset": 2048,
00:15:53.925 "data_size": 63488
00:15:53.925 },
00:15:53.925 {
00:15:53.925 "name": null,
00:15:53.925 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:53.925 "is_configured": false,
00:15:53.925 "data_offset": 2048,
00:15:53.925 "data_size": 63488
00:15:53.925 }
00:15:53.925 ]
00:15:53.925 }'
00:15:53.925 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.925 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.184 [2024-11-26 13:28:42.709025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:54.184 [2024-11-26
13:28:42.709074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.184 [2024-11-26 13:28:42.709097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:54.184 [2024-11-26 13:28:42.709108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.184 [2024-11-26 13:28:42.709545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.184 [2024-11-26 13:28:42.709569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.184 [2024-11-26 13:28:42.709701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:54.184 [2024-11-26 13:28:42.709732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.184 pt3 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.184 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.443 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.443 "name": "raid_bdev1", 00:15:54.443 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c", 00:15:54.443 "strip_size_kb": 64, 00:15:54.443 "state": "configuring", 00:15:54.443 "raid_level": "raid5f", 00:15:54.443 "superblock": true, 00:15:54.443 "num_base_bdevs": 4, 00:15:54.443 "num_base_bdevs_discovered": 2, 00:15:54.443 "num_base_bdevs_operational": 3, 00:15:54.443 "base_bdevs_list": [ 00:15:54.443 { 00:15:54.443 "name": null, 00:15:54.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.443 "is_configured": false, 00:15:54.443 "data_offset": 2048, 00:15:54.443 "data_size": 63488 00:15:54.443 }, 00:15:54.443 { 00:15:54.443 "name": "pt2", 00:15:54.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.443 "is_configured": true, 00:15:54.443 "data_offset": 2048, 00:15:54.443 "data_size": 63488 00:15:54.443 }, 00:15:54.444 { 00:15:54.444 "name": "pt3", 00:15:54.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.444 "is_configured": true, 00:15:54.444 "data_offset": 2048, 00:15:54.444 "data_size": 63488 00:15:54.444 }, 00:15:54.444 { 00:15:54.444 "name": null, 00:15:54.444 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:54.444 "is_configured": false, 00:15:54.444 "data_offset": 2048, 
00:15:54.444 "data_size": 63488 00:15:54.444 } 00:15:54.444 ] 00:15:54.444 }' 00:15:54.444 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.444 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.703 [2024-11-26 13:28:43.237168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:54.703 [2024-11-26 13:28:43.237215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.703 [2024-11-26 13:28:43.237271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:54.703 [2024-11-26 13:28:43.237286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.703 [2024-11-26 13:28:43.237730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.703 [2024-11-26 13:28:43.237765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:54.703 [2024-11-26 13:28:43.237838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:54.703 [2024-11-26 13:28:43.237864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:54.703 [2024-11-26 13:28:43.238020] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:54.703 [2024-11-26 13:28:43.238034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:54.703 [2024-11-26 13:28:43.238314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:54.703 [2024-11-26 13:28:43.243643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:54.703 pt4 00:15:54.703 [2024-11-26 13:28:43.243808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:54.703 [2024-11-26 13:28:43.244101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.703 
13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.703 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.962 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.962 "name": "raid_bdev1", 00:15:54.962 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c", 00:15:54.962 "strip_size_kb": 64, 00:15:54.962 "state": "online", 00:15:54.962 "raid_level": "raid5f", 00:15:54.962 "superblock": true, 00:15:54.962 "num_base_bdevs": 4, 00:15:54.962 "num_base_bdevs_discovered": 3, 00:15:54.962 "num_base_bdevs_operational": 3, 00:15:54.962 "base_bdevs_list": [ 00:15:54.962 { 00:15:54.962 "name": null, 00:15:54.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.962 "is_configured": false, 00:15:54.962 "data_offset": 2048, 00:15:54.962 "data_size": 63488 00:15:54.962 }, 00:15:54.962 { 00:15:54.962 "name": "pt2", 00:15:54.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.962 "is_configured": true, 00:15:54.962 "data_offset": 2048, 00:15:54.962 "data_size": 63488 00:15:54.962 }, 00:15:54.962 { 00:15:54.962 "name": "pt3", 00:15:54.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.962 "is_configured": true, 00:15:54.962 "data_offset": 2048, 00:15:54.962 "data_size": 63488 00:15:54.962 }, 00:15:54.962 { 00:15:54.962 "name": "pt4", 00:15:54.962 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:54.962 "is_configured": true, 00:15:54.962 "data_offset": 2048, 00:15:54.962 "data_size": 63488 00:15:54.962 } 00:15:54.962 ] 00:15:54.962 }' 00:15:54.963 13:28:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.963 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.222 [2024-11-26 13:28:43.774361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.222 [2024-11-26 13:28:43.774522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.222 [2024-11-26 13:28:43.774616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.222 [2024-11-26 13:28:43.774689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.222 [2024-11-26 13:28:43.774707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.222 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.482 [2024-11-26 13:28:43.854518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.482 [2024-11-26 13:28:43.854711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.482 [2024-11-26 13:28:43.854804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:55.482 [2024-11-26 13:28:43.854969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.482 [2024-11-26 13:28:43.857338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.482 [2024-11-26 13:28:43.857510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.482 [2024-11-26 13:28:43.857601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:55.482 [2024-11-26 13:28:43.857662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.482 
[2024-11-26 13:28:43.857801] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:55.482 [2024-11-26 13:28:43.857822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.482 [2024-11-26 13:28:43.857854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:55.482 [2024-11-26 13:28:43.857914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.482 [2024-11-26 13:28:43.858049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.482 pt1 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.482 "name": "raid_bdev1", 00:15:55.482 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c", 00:15:55.482 "strip_size_kb": 64, 00:15:55.482 "state": "configuring", 00:15:55.482 "raid_level": "raid5f", 00:15:55.482 "superblock": true, 00:15:55.482 "num_base_bdevs": 4, 00:15:55.482 "num_base_bdevs_discovered": 2, 00:15:55.482 "num_base_bdevs_operational": 3, 00:15:55.482 "base_bdevs_list": [ 00:15:55.482 { 00:15:55.482 "name": null, 00:15:55.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.482 "is_configured": false, 00:15:55.482 "data_offset": 2048, 00:15:55.482 "data_size": 63488 00:15:55.482 }, 00:15:55.482 { 00:15:55.482 "name": "pt2", 00:15:55.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.482 "is_configured": true, 00:15:55.482 "data_offset": 2048, 00:15:55.482 "data_size": 63488 00:15:55.482 }, 00:15:55.482 { 00:15:55.482 "name": "pt3", 00:15:55.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.482 "is_configured": true, 00:15:55.482 "data_offset": 2048, 00:15:55.482 "data_size": 63488 00:15:55.482 }, 00:15:55.482 { 00:15:55.482 "name": null, 00:15:55.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:55.482 "is_configured": false, 00:15:55.482 "data_offset": 2048, 00:15:55.482 "data_size": 63488 00:15:55.482 } 00:15:55.482 ] 
00:15:55.482 }' 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.482 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.051 [2024-11-26 13:28:44.434706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:56.051 [2024-11-26 13:28:44.434918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.051 [2024-11-26 13:28:44.434959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:56.051 [2024-11-26 13:28:44.434974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.051 [2024-11-26 13:28:44.435464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.051 [2024-11-26 13:28:44.435489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:56.051 [2024-11-26 13:28:44.435593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:56.051 [2024-11-26 13:28:44.435640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:56.051 [2024-11-26 13:28:44.435768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:56.051 [2024-11-26 13:28:44.435782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:56.051 [2024-11-26 13:28:44.436033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:56.051 [2024-11-26 13:28:44.441604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:56.051 [2024-11-26 13:28:44.441796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:56.051 [2024-11-26 13:28:44.442085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.051 pt4 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.051 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.052 13:28:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.052 "name": "raid_bdev1", 00:15:56.052 "uuid": "a8269a04-9475-4232-9975-fe2b4a8dae8c", 00:15:56.052 "strip_size_kb": 64, 00:15:56.052 "state": "online", 00:15:56.052 "raid_level": "raid5f", 00:15:56.052 "superblock": true, 00:15:56.052 "num_base_bdevs": 4, 00:15:56.052 "num_base_bdevs_discovered": 3, 00:15:56.052 "num_base_bdevs_operational": 3, 00:15:56.052 "base_bdevs_list": [ 00:15:56.052 { 00:15:56.052 "name": null, 00:15:56.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.052 "is_configured": false, 00:15:56.052 "data_offset": 2048, 00:15:56.052 "data_size": 63488 00:15:56.052 }, 00:15:56.052 { 00:15:56.052 "name": "pt2", 00:15:56.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.052 "is_configured": true, 00:15:56.052 "data_offset": 2048, 00:15:56.052 "data_size": 63488 00:15:56.052 }, 00:15:56.052 { 00:15:56.052 "name": "pt3", 00:15:56.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.052 "is_configured": true, 00:15:56.052 "data_offset": 2048, 00:15:56.052 "data_size": 63488 
00:15:56.052 }, 00:15:56.052 { 00:15:56.052 "name": "pt4", 00:15:56.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:56.052 "is_configured": true, 00:15:56.052 "data_offset": 2048, 00:15:56.052 "data_size": 63488 00:15:56.052 } 00:15:56.052 ] 00:15:56.052 }' 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.052 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.621 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:56.621 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:56.621 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.621 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.621 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.621 [2024-11-26 13:28:45.028649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a8269a04-9475-4232-9975-fe2b4a8dae8c '!=' a8269a04-9475-4232-9975-fe2b4a8dae8c ']' 00:15:56.621 13:28:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83760 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83760 ']' 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83760 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83760 00:15:56.621 killing process with pid 83760 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83760' 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83760 00:15:56.621 [2024-11-26 13:28:45.106398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.621 [2024-11-26 13:28:45.106467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.621 13:28:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83760 00:15:56.621 [2024-11-26 13:28:45.106536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.621 [2024-11-26 13:28:45.106554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:56.880 [2024-11-26 13:28:45.373883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.819 13:28:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:57.819 
00:15:57.819 real 0m9.101s 00:15:57.819 user 0m15.218s 00:15:57.819 sys 0m1.297s 00:15:57.819 13:28:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.819 ************************************ 00:15:57.819 END TEST raid5f_superblock_test 00:15:57.819 ************************************ 00:15:57.819 13:28:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.819 13:28:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:57.819 13:28:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:57.819 13:28:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:57.819 13:28:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.819 13:28:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.819 ************************************ 00:15:57.819 START TEST raid5f_rebuild_test 00:15:57.819 ************************************ 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:57.819 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:57.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84253 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84253 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84253 ']' 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.820 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.114 [2024-11-26 13:28:46.395702] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:15:58.114 [2024-11-26 13:28:46.396124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84253 ] 00:15:58.114 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.114 Zero copy mechanism will not be used. 00:15:58.114 [2024-11-26 13:28:46.572038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.420 [2024-11-26 13:28:46.670174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.420 [2024-11-26 13:28:46.837340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.420 [2024-11-26 13:28:46.837634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.996 BaseBdev1_malloc 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.996 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:58.996 [2024-11-26 13:28:47.377642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:58.996 [2024-11-26 13:28:47.377715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.996 [2024-11-26 13:28:47.377743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.997 [2024-11-26 13:28:47.377759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.997 [2024-11-26 13:28:47.380051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.997 [2024-11-26 13:28:47.380263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.997 BaseBdev1 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 BaseBdev2_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 [2024-11-26 13:28:47.420303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:58.997 [2024-11-26 13:28:47.420383] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.997 [2024-11-26 13:28:47.420425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:58.997 [2024-11-26 13:28:47.420444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.997 [2024-11-26 13:28:47.423096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.997 [2024-11-26 13:28:47.423157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.997 BaseBdev2 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 BaseBdev3_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 [2024-11-26 13:28:47.477749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:58.997 [2024-11-26 13:28:47.477807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.997 [2024-11-26 13:28:47.477831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:58.997 
[2024-11-26 13:28:47.477846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.997 [2024-11-26 13:28:47.480193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.997 [2024-11-26 13:28:47.480248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:58.997 BaseBdev3 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 BaseBdev4_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 [2024-11-26 13:28:47.519344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:58.997 [2024-11-26 13:28:47.519397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.997 [2024-11-26 13:28:47.519422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:58.997 [2024-11-26 13:28:47.519437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.997 [2024-11-26 13:28:47.521770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:58.997 [2024-11-26 13:28:47.521816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:58.997 BaseBdev4 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 spare_malloc 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.997 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.257 spare_delay 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.257 [2024-11-26 13:28:47.572843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.257 [2024-11-26 13:28:47.572906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.257 [2024-11-26 13:28:47.572932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:59.257 [2024-11-26 13:28:47.572947] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.257 [2024-11-26 13:28:47.575352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.257 [2024-11-26 13:28:47.575396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.257 spare 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.257 [2024-11-26 13:28:47.580895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.257 [2024-11-26 13:28:47.583088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.257 [2024-11-26 13:28:47.583324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.257 [2024-11-26 13:28:47.583446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.257 [2024-11-26 13:28:47.583640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:59.257 [2024-11-26 13:28:47.583762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:59.257 [2024-11-26 13:28:47.584132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:59.257 [2024-11-26 13:28:47.589867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:59.257 [2024-11-26 13:28:47.589999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:59.257 [2024-11-26 
13:28:47.590415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.257 "name": "raid_bdev1", 00:15:59.257 "uuid": 
"5333b918-623b-40cb-8742-d4ca13be19b6", 00:15:59.257 "strip_size_kb": 64, 00:15:59.257 "state": "online", 00:15:59.257 "raid_level": "raid5f", 00:15:59.257 "superblock": false, 00:15:59.257 "num_base_bdevs": 4, 00:15:59.257 "num_base_bdevs_discovered": 4, 00:15:59.257 "num_base_bdevs_operational": 4, 00:15:59.257 "base_bdevs_list": [ 00:15:59.257 { 00:15:59.257 "name": "BaseBdev1", 00:15:59.257 "uuid": "9cefa55b-c27e-5473-84d8-70fba285c270", 00:15:59.257 "is_configured": true, 00:15:59.257 "data_offset": 0, 00:15:59.257 "data_size": 65536 00:15:59.257 }, 00:15:59.257 { 00:15:59.257 "name": "BaseBdev2", 00:15:59.257 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:15:59.257 "is_configured": true, 00:15:59.257 "data_offset": 0, 00:15:59.257 "data_size": 65536 00:15:59.257 }, 00:15:59.257 { 00:15:59.257 "name": "BaseBdev3", 00:15:59.257 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:15:59.257 "is_configured": true, 00:15:59.257 "data_offset": 0, 00:15:59.257 "data_size": 65536 00:15:59.257 }, 00:15:59.257 { 00:15:59.257 "name": "BaseBdev4", 00:15:59.257 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:15:59.257 "is_configured": true, 00:15:59.257 "data_offset": 0, 00:15:59.257 "data_size": 65536 00:15:59.257 } 00:15:59.257 ] 00:15:59.257 }' 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.257 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.826 [2024-11-26 13:28:48.101659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:59.826 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.827 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:00.086 [2024-11-26 13:28:48.481564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:00.086 /dev/nbd0 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.086 1+0 records in 00:16:00.086 1+0 records out 00:16:00.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057723 s, 7.1 MB/s 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.086 13:28:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:00.086 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:00.654 512+0 records in 00:16:00.654 512+0 records out 00:16:00.654 100663296 bytes (101 MB, 96 MiB) copied, 0.592957 s, 170 MB/s 00:16:00.654 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:00.654 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.654 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:00.654 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.654 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:00.654 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.654 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:00.913 [2024-11-26 13:28:49.413211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.913 [2024-11-26 13:28:49.441191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.913 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.172 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.172 "name": "raid_bdev1", 00:16:01.172 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:01.172 "strip_size_kb": 64, 00:16:01.172 "state": "online", 00:16:01.172 "raid_level": "raid5f", 00:16:01.172 "superblock": false, 00:16:01.172 "num_base_bdevs": 4, 00:16:01.172 "num_base_bdevs_discovered": 3, 00:16:01.172 "num_base_bdevs_operational": 3, 00:16:01.172 "base_bdevs_list": [ 00:16:01.172 { 00:16:01.172 "name": null, 00:16:01.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.172 "is_configured": false, 00:16:01.172 "data_offset": 0, 00:16:01.172 "data_size": 65536 00:16:01.172 }, 00:16:01.172 { 00:16:01.172 "name": "BaseBdev2", 00:16:01.172 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:01.172 "is_configured": true, 00:16:01.172 
"data_offset": 0, 00:16:01.172 "data_size": 65536 00:16:01.172 }, 00:16:01.172 { 00:16:01.172 "name": "BaseBdev3", 00:16:01.172 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:01.172 "is_configured": true, 00:16:01.172 "data_offset": 0, 00:16:01.172 "data_size": 65536 00:16:01.172 }, 00:16:01.172 { 00:16:01.172 "name": "BaseBdev4", 00:16:01.172 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:01.172 "is_configured": true, 00:16:01.172 "data_offset": 0, 00:16:01.172 "data_size": 65536 00:16:01.172 } 00:16:01.172 ] 00:16:01.172 }' 00:16:01.172 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.172 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.431 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.431 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.431 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.431 [2024-11-26 13:28:49.961321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.431 [2024-11-26 13:28:49.973444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:01.431 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.431 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:01.431 [2024-11-26 13:28:49.981282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.808 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.808 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.808 "name": "raid_bdev1", 00:16:02.808 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:02.808 "strip_size_kb": 64, 00:16:02.808 "state": "online", 00:16:02.808 "raid_level": "raid5f", 00:16:02.808 "superblock": false, 00:16:02.808 "num_base_bdevs": 4, 00:16:02.808 "num_base_bdevs_discovered": 4, 00:16:02.808 "num_base_bdevs_operational": 4, 00:16:02.808 "process": { 00:16:02.808 "type": "rebuild", 00:16:02.808 "target": "spare", 00:16:02.808 "progress": { 00:16:02.808 "blocks": 17280, 00:16:02.808 "percent": 8 00:16:02.808 } 00:16:02.808 }, 00:16:02.808 "base_bdevs_list": [ 00:16:02.808 { 00:16:02.808 "name": "spare", 00:16:02.808 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:02.808 "is_configured": true, 00:16:02.808 "data_offset": 0, 00:16:02.808 "data_size": 65536 00:16:02.808 }, 00:16:02.808 { 00:16:02.808 "name": "BaseBdev2", 00:16:02.808 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:02.808 "is_configured": true, 00:16:02.808 "data_offset": 0, 00:16:02.808 "data_size": 65536 00:16:02.808 }, 00:16:02.808 { 00:16:02.808 "name": "BaseBdev3", 00:16:02.808 "uuid": 
"fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:02.808 "is_configured": true, 00:16:02.808 "data_offset": 0, 00:16:02.808 "data_size": 65536 00:16:02.809 }, 00:16:02.809 { 00:16:02.809 "name": "BaseBdev4", 00:16:02.809 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:02.809 "is_configured": true, 00:16:02.809 "data_offset": 0, 00:16:02.809 "data_size": 65536 00:16:02.809 } 00:16:02.809 ] 00:16:02.809 }' 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.809 [2024-11-26 13:28:51.142478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.809 [2024-11-26 13:28:51.189885] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.809 [2024-11-26 13:28:51.189954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.809 [2024-11-26 13:28:51.189976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.809 [2024-11-26 13:28:51.189989] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.809 "name": "raid_bdev1", 00:16:02.809 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:02.809 "strip_size_kb": 64, 00:16:02.809 "state": "online", 00:16:02.809 "raid_level": "raid5f", 00:16:02.809 "superblock": false, 00:16:02.809 "num_base_bdevs": 4, 00:16:02.809 "num_base_bdevs_discovered": 3, 00:16:02.809 
"num_base_bdevs_operational": 3, 00:16:02.809 "base_bdevs_list": [ 00:16:02.809 { 00:16:02.809 "name": null, 00:16:02.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.809 "is_configured": false, 00:16:02.809 "data_offset": 0, 00:16:02.809 "data_size": 65536 00:16:02.809 }, 00:16:02.809 { 00:16:02.809 "name": "BaseBdev2", 00:16:02.809 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:02.809 "is_configured": true, 00:16:02.809 "data_offset": 0, 00:16:02.809 "data_size": 65536 00:16:02.809 }, 00:16:02.809 { 00:16:02.809 "name": "BaseBdev3", 00:16:02.809 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:02.809 "is_configured": true, 00:16:02.809 "data_offset": 0, 00:16:02.809 "data_size": 65536 00:16:02.809 }, 00:16:02.809 { 00:16:02.809 "name": "BaseBdev4", 00:16:02.809 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:02.809 "is_configured": true, 00:16:02.809 "data_offset": 0, 00:16:02.809 "data_size": 65536 00:16:02.809 } 00:16:02.809 ] 00:16:02.809 }' 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.809 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.377 13:28:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.377 "name": "raid_bdev1", 00:16:03.377 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:03.377 "strip_size_kb": 64, 00:16:03.377 "state": "online", 00:16:03.377 "raid_level": "raid5f", 00:16:03.377 "superblock": false, 00:16:03.377 "num_base_bdevs": 4, 00:16:03.377 "num_base_bdevs_discovered": 3, 00:16:03.377 "num_base_bdevs_operational": 3, 00:16:03.377 "base_bdevs_list": [ 00:16:03.377 { 00:16:03.377 "name": null, 00:16:03.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.377 "is_configured": false, 00:16:03.377 "data_offset": 0, 00:16:03.377 "data_size": 65536 00:16:03.377 }, 00:16:03.377 { 00:16:03.377 "name": "BaseBdev2", 00:16:03.377 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:03.377 "is_configured": true, 00:16:03.377 "data_offset": 0, 00:16:03.377 "data_size": 65536 00:16:03.377 }, 00:16:03.377 { 00:16:03.377 "name": "BaseBdev3", 00:16:03.377 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:03.377 "is_configured": true, 00:16:03.377 "data_offset": 0, 00:16:03.377 "data_size": 65536 00:16:03.377 }, 00:16:03.377 { 00:16:03.377 "name": "BaseBdev4", 00:16:03.377 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:03.377 "is_configured": true, 00:16:03.377 "data_offset": 0, 00:16:03.377 "data_size": 65536 00:16:03.377 } 00:16:03.377 ] 00:16:03.377 }' 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.377 [2024-11-26 13:28:51.887940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.377 [2024-11-26 13:28:51.898596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.377 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:03.377 [2024-11-26 13:28:51.906193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.753 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.753 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.753 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.753 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.753 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.753 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.754 13:28:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.754 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.754 
13:28:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.754 13:28:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.754 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.754 "name": "raid_bdev1", 00:16:04.754 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:04.754 "strip_size_kb": 64, 00:16:04.754 "state": "online", 00:16:04.754 "raid_level": "raid5f", 00:16:04.754 "superblock": false, 00:16:04.754 "num_base_bdevs": 4, 00:16:04.754 "num_base_bdevs_discovered": 4, 00:16:04.754 "num_base_bdevs_operational": 4, 00:16:04.754 "process": { 00:16:04.754 "type": "rebuild", 00:16:04.754 "target": "spare", 00:16:04.754 "progress": { 00:16:04.754 "blocks": 19200, 00:16:04.754 "percent": 9 00:16:04.754 } 00:16:04.754 }, 00:16:04.754 "base_bdevs_list": [ 00:16:04.754 { 00:16:04.754 "name": "spare", 00:16:04.754 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 }, 00:16:04.754 { 00:16:04.754 "name": "BaseBdev2", 00:16:04.754 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 }, 00:16:04.754 { 00:16:04.754 "name": "BaseBdev3", 00:16:04.754 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 }, 00:16:04.754 { 00:16:04.754 "name": "BaseBdev4", 00:16:04.754 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 } 00:16:04.754 ] 00:16:04.754 }' 00:16:04.754 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=635 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:04.754 "name": "raid_bdev1", 00:16:04.754 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:04.754 "strip_size_kb": 64, 00:16:04.754 "state": "online", 00:16:04.754 "raid_level": "raid5f", 00:16:04.754 "superblock": false, 00:16:04.754 "num_base_bdevs": 4, 00:16:04.754 "num_base_bdevs_discovered": 4, 00:16:04.754 "num_base_bdevs_operational": 4, 00:16:04.754 "process": { 00:16:04.754 "type": "rebuild", 00:16:04.754 "target": "spare", 00:16:04.754 "progress": { 00:16:04.754 "blocks": 21120, 00:16:04.754 "percent": 10 00:16:04.754 } 00:16:04.754 }, 00:16:04.754 "base_bdevs_list": [ 00:16:04.754 { 00:16:04.754 "name": "spare", 00:16:04.754 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 }, 00:16:04.754 { 00:16:04.754 "name": "BaseBdev2", 00:16:04.754 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 }, 00:16:04.754 { 00:16:04.754 "name": "BaseBdev3", 00:16:04.754 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 }, 00:16:04.754 { 00:16:04.754 "name": "BaseBdev4", 00:16:04.754 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:04.754 "is_configured": true, 00:16:04.754 "data_offset": 0, 00:16:04.754 "data_size": 65536 00:16:04.754 } 00:16:04.754 ] 00:16:04.754 }' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.754 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.754 13:28:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.690 13:28:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.949 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.949 "name": "raid_bdev1", 00:16:05.949 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:05.949 "strip_size_kb": 64, 00:16:05.949 "state": "online", 00:16:05.949 "raid_level": "raid5f", 00:16:05.949 "superblock": false, 00:16:05.949 "num_base_bdevs": 4, 00:16:05.949 "num_base_bdevs_discovered": 4, 00:16:05.949 "num_base_bdevs_operational": 4, 00:16:05.949 "process": { 00:16:05.949 "type": "rebuild", 00:16:05.949 "target": "spare", 00:16:05.949 "progress": { 00:16:05.949 "blocks": 44160, 00:16:05.949 "percent": 22 00:16:05.949 } 00:16:05.949 }, 00:16:05.949 "base_bdevs_list": [ 00:16:05.949 { 
00:16:05.949 "name": "spare", 00:16:05.949 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:05.949 "is_configured": true, 00:16:05.949 "data_offset": 0, 00:16:05.949 "data_size": 65536 00:16:05.949 }, 00:16:05.949 { 00:16:05.949 "name": "BaseBdev2", 00:16:05.949 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:05.949 "is_configured": true, 00:16:05.949 "data_offset": 0, 00:16:05.949 "data_size": 65536 00:16:05.949 }, 00:16:05.949 { 00:16:05.949 "name": "BaseBdev3", 00:16:05.949 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:05.949 "is_configured": true, 00:16:05.949 "data_offset": 0, 00:16:05.949 "data_size": 65536 00:16:05.949 }, 00:16:05.949 { 00:16:05.949 "name": "BaseBdev4", 00:16:05.949 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:05.949 "is_configured": true, 00:16:05.949 "data_offset": 0, 00:16:05.949 "data_size": 65536 00:16:05.949 } 00:16:05.949 ] 00:16:05.949 }' 00:16:05.949 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.949 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.949 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.949 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.950 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.886 "name": "raid_bdev1", 00:16:06.886 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:06.886 "strip_size_kb": 64, 00:16:06.886 "state": "online", 00:16:06.886 "raid_level": "raid5f", 00:16:06.886 "superblock": false, 00:16:06.886 "num_base_bdevs": 4, 00:16:06.886 "num_base_bdevs_discovered": 4, 00:16:06.886 "num_base_bdevs_operational": 4, 00:16:06.886 "process": { 00:16:06.886 "type": "rebuild", 00:16:06.886 "target": "spare", 00:16:06.886 "progress": { 00:16:06.886 "blocks": 65280, 00:16:06.886 "percent": 33 00:16:06.886 } 00:16:06.886 }, 00:16:06.886 "base_bdevs_list": [ 00:16:06.886 { 00:16:06.886 "name": "spare", 00:16:06.886 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:06.886 "is_configured": true, 00:16:06.886 "data_offset": 0, 00:16:06.886 "data_size": 65536 00:16:06.886 }, 00:16:06.886 { 00:16:06.886 "name": "BaseBdev2", 00:16:06.886 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:06.886 "is_configured": true, 00:16:06.886 "data_offset": 0, 00:16:06.886 "data_size": 65536 00:16:06.886 }, 00:16:06.886 { 00:16:06.886 "name": "BaseBdev3", 00:16:06.886 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:06.886 "is_configured": true, 00:16:06.886 "data_offset": 0, 00:16:06.886 
"data_size": 65536 00:16:06.886 }, 00:16:06.886 { 00:16:06.886 "name": "BaseBdev4", 00:16:06.886 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:06.886 "is_configured": true, 00:16:06.886 "data_offset": 0, 00:16:06.886 "data_size": 65536 00:16:06.886 } 00:16:06.886 ] 00:16:06.886 }' 00:16:06.886 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.145 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.145 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.145 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.145 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.082 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.082 "name": "raid_bdev1", 00:16:08.082 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:08.082 "strip_size_kb": 64, 00:16:08.082 "state": "online", 00:16:08.082 "raid_level": "raid5f", 00:16:08.082 "superblock": false, 00:16:08.082 "num_base_bdevs": 4, 00:16:08.082 "num_base_bdevs_discovered": 4, 00:16:08.082 "num_base_bdevs_operational": 4, 00:16:08.082 "process": { 00:16:08.082 "type": "rebuild", 00:16:08.082 "target": "spare", 00:16:08.082 "progress": { 00:16:08.083 "blocks": 88320, 00:16:08.083 "percent": 44 00:16:08.083 } 00:16:08.083 }, 00:16:08.083 "base_bdevs_list": [ 00:16:08.083 { 00:16:08.083 "name": "spare", 00:16:08.083 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:08.083 "is_configured": true, 00:16:08.083 "data_offset": 0, 00:16:08.083 "data_size": 65536 00:16:08.083 }, 00:16:08.083 { 00:16:08.083 "name": "BaseBdev2", 00:16:08.083 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:08.083 "is_configured": true, 00:16:08.083 "data_offset": 0, 00:16:08.083 "data_size": 65536 00:16:08.083 }, 00:16:08.083 { 00:16:08.083 "name": "BaseBdev3", 00:16:08.083 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:08.083 "is_configured": true, 00:16:08.083 "data_offset": 0, 00:16:08.083 "data_size": 65536 00:16:08.083 }, 00:16:08.083 { 00:16:08.083 "name": "BaseBdev4", 00:16:08.083 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:08.083 "is_configured": true, 00:16:08.083 "data_offset": 0, 00:16:08.083 "data_size": 65536 00:16:08.083 } 00:16:08.083 ] 00:16:08.083 }' 00:16:08.083 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.342 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.342 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:08.342 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.342 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.280 "name": "raid_bdev1", 00:16:09.280 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:09.280 "strip_size_kb": 64, 00:16:09.280 "state": "online", 00:16:09.280 "raid_level": "raid5f", 00:16:09.280 "superblock": false, 00:16:09.280 "num_base_bdevs": 4, 00:16:09.280 "num_base_bdevs_discovered": 4, 00:16:09.280 "num_base_bdevs_operational": 4, 00:16:09.280 "process": { 00:16:09.280 "type": "rebuild", 00:16:09.280 "target": "spare", 00:16:09.280 
"progress": { 00:16:09.280 "blocks": 109440, 00:16:09.280 "percent": 55 00:16:09.280 } 00:16:09.280 }, 00:16:09.280 "base_bdevs_list": [ 00:16:09.280 { 00:16:09.280 "name": "spare", 00:16:09.280 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:09.280 "is_configured": true, 00:16:09.280 "data_offset": 0, 00:16:09.280 "data_size": 65536 00:16:09.280 }, 00:16:09.280 { 00:16:09.280 "name": "BaseBdev2", 00:16:09.280 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:09.280 "is_configured": true, 00:16:09.280 "data_offset": 0, 00:16:09.280 "data_size": 65536 00:16:09.280 }, 00:16:09.280 { 00:16:09.280 "name": "BaseBdev3", 00:16:09.280 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:09.280 "is_configured": true, 00:16:09.280 "data_offset": 0, 00:16:09.280 "data_size": 65536 00:16:09.280 }, 00:16:09.280 { 00:16:09.280 "name": "BaseBdev4", 00:16:09.280 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:09.280 "is_configured": true, 00:16:09.280 "data_offset": 0, 00:16:09.280 "data_size": 65536 00:16:09.280 } 00:16:09.280 ] 00:16:09.280 }' 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.280 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.540 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.540 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.476 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.476 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.476 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.477 13:28:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.477 "name": "raid_bdev1", 00:16:10.477 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:10.477 "strip_size_kb": 64, 00:16:10.477 "state": "online", 00:16:10.477 "raid_level": "raid5f", 00:16:10.477 "superblock": false, 00:16:10.477 "num_base_bdevs": 4, 00:16:10.477 "num_base_bdevs_discovered": 4, 00:16:10.477 "num_base_bdevs_operational": 4, 00:16:10.477 "process": { 00:16:10.477 "type": "rebuild", 00:16:10.477 "target": "spare", 00:16:10.477 "progress": { 00:16:10.477 "blocks": 132480, 00:16:10.477 "percent": 67 00:16:10.477 } 00:16:10.477 }, 00:16:10.477 "base_bdevs_list": [ 00:16:10.477 { 00:16:10.477 "name": "spare", 00:16:10.477 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:10.477 "is_configured": true, 00:16:10.477 "data_offset": 0, 00:16:10.477 "data_size": 65536 00:16:10.477 }, 00:16:10.477 { 00:16:10.477 "name": "BaseBdev2", 00:16:10.477 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:10.477 "is_configured": true, 00:16:10.477 "data_offset": 0, 00:16:10.477 "data_size": 65536 00:16:10.477 }, 00:16:10.477 { 
00:16:10.477 "name": "BaseBdev3", 00:16:10.477 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:10.477 "is_configured": true, 00:16:10.477 "data_offset": 0, 00:16:10.477 "data_size": 65536 00:16:10.477 }, 00:16:10.477 { 00:16:10.477 "name": "BaseBdev4", 00:16:10.477 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:10.477 "is_configured": true, 00:16:10.477 "data_offset": 0, 00:16:10.477 "data_size": 65536 00:16:10.477 } 00:16:10.477 ] 00:16:10.477 }' 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.477 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.477 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.477 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.853 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.853 "name": "raid_bdev1", 00:16:11.853 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:11.853 "strip_size_kb": 64, 00:16:11.853 "state": "online", 00:16:11.853 "raid_level": "raid5f", 00:16:11.853 "superblock": false, 00:16:11.853 "num_base_bdevs": 4, 00:16:11.853 "num_base_bdevs_discovered": 4, 00:16:11.853 "num_base_bdevs_operational": 4, 00:16:11.853 "process": { 00:16:11.853 "type": "rebuild", 00:16:11.853 "target": "spare", 00:16:11.853 "progress": { 00:16:11.853 "blocks": 153600, 00:16:11.853 "percent": 78 00:16:11.853 } 00:16:11.853 }, 00:16:11.853 "base_bdevs_list": [ 00:16:11.853 { 00:16:11.853 "name": "spare", 00:16:11.853 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:11.853 "is_configured": true, 00:16:11.853 "data_offset": 0, 00:16:11.853 "data_size": 65536 00:16:11.853 }, 00:16:11.853 { 00:16:11.853 "name": "BaseBdev2", 00:16:11.853 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:11.853 "is_configured": true, 00:16:11.853 "data_offset": 0, 00:16:11.853 "data_size": 65536 00:16:11.853 }, 00:16:11.853 { 00:16:11.853 "name": "BaseBdev3", 00:16:11.853 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:11.854 "is_configured": true, 00:16:11.854 "data_offset": 0, 00:16:11.854 "data_size": 65536 00:16:11.854 }, 00:16:11.854 { 00:16:11.854 "name": "BaseBdev4", 00:16:11.854 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:11.854 "is_configured": true, 00:16:11.854 "data_offset": 0, 00:16:11.854 "data_size": 65536 00:16:11.854 } 00:16:11.854 ] 00:16:11.854 }' 00:16:11.854 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.854 13:29:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.854 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.854 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.854 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.792 "name": "raid_bdev1", 00:16:12.792 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:12.792 "strip_size_kb": 64, 00:16:12.792 "state": "online", 00:16:12.792 "raid_level": "raid5f", 00:16:12.792 "superblock": false, 00:16:12.792 "num_base_bdevs": 4, 00:16:12.792 
"num_base_bdevs_discovered": 4, 00:16:12.792 "num_base_bdevs_operational": 4, 00:16:12.792 "process": { 00:16:12.792 "type": "rebuild", 00:16:12.792 "target": "spare", 00:16:12.792 "progress": { 00:16:12.792 "blocks": 176640, 00:16:12.792 "percent": 89 00:16:12.792 } 00:16:12.792 }, 00:16:12.792 "base_bdevs_list": [ 00:16:12.792 { 00:16:12.792 "name": "spare", 00:16:12.792 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:12.792 "is_configured": true, 00:16:12.792 "data_offset": 0, 00:16:12.792 "data_size": 65536 00:16:12.792 }, 00:16:12.792 { 00:16:12.792 "name": "BaseBdev2", 00:16:12.792 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:12.792 "is_configured": true, 00:16:12.792 "data_offset": 0, 00:16:12.792 "data_size": 65536 00:16:12.792 }, 00:16:12.792 { 00:16:12.792 "name": "BaseBdev3", 00:16:12.792 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:12.792 "is_configured": true, 00:16:12.792 "data_offset": 0, 00:16:12.792 "data_size": 65536 00:16:12.792 }, 00:16:12.792 { 00:16:12.792 "name": "BaseBdev4", 00:16:12.792 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:12.792 "is_configured": true, 00:16:12.792 "data_offset": 0, 00:16:12.792 "data_size": 65536 00:16:12.792 } 00:16:12.792 ] 00:16:12.792 }' 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.792 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.051 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.051 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.988 [2024-11-26 13:29:02.276352] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:13.989 [2024-11-26 13:29:02.276421] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:13.989 [2024-11-26 13:29:02.276474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.989 "name": "raid_bdev1", 00:16:13.989 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:13.989 "strip_size_kb": 64, 00:16:13.989 "state": "online", 00:16:13.989 "raid_level": "raid5f", 00:16:13.989 "superblock": false, 00:16:13.989 "num_base_bdevs": 4, 00:16:13.989 "num_base_bdevs_discovered": 4, 00:16:13.989 "num_base_bdevs_operational": 4, 00:16:13.989 "base_bdevs_list": [ 00:16:13.989 { 00:16:13.989 "name": "spare", 00:16:13.989 "uuid": 
"5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:13.989 "is_configured": true, 00:16:13.989 "data_offset": 0, 00:16:13.989 "data_size": 65536 00:16:13.989 }, 00:16:13.989 { 00:16:13.989 "name": "BaseBdev2", 00:16:13.989 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:13.989 "is_configured": true, 00:16:13.989 "data_offset": 0, 00:16:13.989 "data_size": 65536 00:16:13.989 }, 00:16:13.989 { 00:16:13.989 "name": "BaseBdev3", 00:16:13.989 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:13.989 "is_configured": true, 00:16:13.989 "data_offset": 0, 00:16:13.989 "data_size": 65536 00:16:13.989 }, 00:16:13.989 { 00:16:13.989 "name": "BaseBdev4", 00:16:13.989 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:13.989 "is_configured": true, 00:16:13.989 "data_offset": 0, 00:16:13.989 "data_size": 65536 00:16:13.989 } 00:16:13.989 ] 00:16:13.989 }' 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.989 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.248 "name": "raid_bdev1", 00:16:14.248 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:14.248 "strip_size_kb": 64, 00:16:14.248 "state": "online", 00:16:14.248 "raid_level": "raid5f", 00:16:14.248 "superblock": false, 00:16:14.248 "num_base_bdevs": 4, 00:16:14.248 "num_base_bdevs_discovered": 4, 00:16:14.248 "num_base_bdevs_operational": 4, 00:16:14.248 "base_bdevs_list": [ 00:16:14.248 { 00:16:14.248 "name": "spare", 00:16:14.248 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 }, 00:16:14.248 { 00:16:14.248 "name": "BaseBdev2", 00:16:14.248 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 }, 00:16:14.248 { 00:16:14.248 "name": "BaseBdev3", 00:16:14.248 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 }, 00:16:14.248 { 00:16:14.248 "name": "BaseBdev4", 00:16:14.248 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 } 00:16:14.248 ] 00:16:14.248 }' 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.248 "name": "raid_bdev1", 00:16:14.248 "uuid": "5333b918-623b-40cb-8742-d4ca13be19b6", 00:16:14.248 "strip_size_kb": 64, 00:16:14.248 "state": "online", 00:16:14.248 "raid_level": "raid5f", 00:16:14.248 "superblock": false, 00:16:14.248 "num_base_bdevs": 4, 00:16:14.248 "num_base_bdevs_discovered": 4, 00:16:14.248 "num_base_bdevs_operational": 4, 00:16:14.248 "base_bdevs_list": [ 00:16:14.248 { 00:16:14.248 "name": "spare", 00:16:14.248 "uuid": "5dccaed1-fa81-5c3f-bd25-2cf53fbf4007", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 }, 00:16:14.248 { 00:16:14.248 "name": "BaseBdev2", 00:16:14.248 "uuid": "472ca8e5-f4a6-5171-b591-9b1bcfd4d2c7", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 }, 00:16:14.248 { 00:16:14.248 "name": "BaseBdev3", 00:16:14.248 "uuid": "fe91c8e5-296c-526c-8a4a-cffd83e3f911", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 }, 00:16:14.248 { 00:16:14.248 "name": "BaseBdev4", 00:16:14.248 "uuid": "9961e93f-4aa6-5287-bc38-64894e099a3f", 00:16:14.248 "is_configured": true, 00:16:14.248 "data_offset": 0, 00:16:14.248 "data_size": 65536 00:16:14.248 } 00:16:14.248 ] 00:16:14.248 }' 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.248 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.816 [2024-11-26 13:29:03.194409] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.816 [2024-11-26 13:29:03.194591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.816 [2024-11-26 13:29:03.194688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.816 [2024-11-26 13:29:03.194790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.816 [2024-11-26 13:29:03.194805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.816 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:15.076 /dev/nbd0 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.076 1+0 records in 
00:16:15.076 1+0 records out 00:16:15.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277953 s, 14.7 MB/s 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.076 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:15.335 /dev/nbd1 00:16:15.335 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:15.335 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:15.335 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:15.335 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:15.335 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:15.335 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.335 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.594 1+0 records in 00:16:15.594 1+0 records out 00:16:15.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367354 s, 11.2 MB/s 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.594 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:15.594 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:15.594 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.594 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.594 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.594 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:15.594 13:29:04 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.594 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.853 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84253 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84253 ']' 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84253 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84253 00:16:16.112 killing process with pid 84253 00:16:16.112 Received shutdown signal, test time was about 60.000000 seconds 00:16:16.112 00:16:16.112 Latency(us) 00:16:16.112 [2024-11-26T13:29:04.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.112 [2024-11-26T13:29:04.682Z] =================================================================================================================== 00:16:16.112 [2024-11-26T13:29:04.682Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84253' 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84253 00:16:16.112 [2024-11-26 13:29:04.649890] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.112 13:29:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@978 -- # wait 84253 00:16:16.680 [2024-11-26 13:29:04.982479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.248 13:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:17.248 00:16:17.248 real 0m19.529s 00:16:17.248 user 0m24.356s 00:16:17.248 sys 0m2.209s 00:16:17.248 13:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.248 13:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.248 ************************************ 00:16:17.248 END TEST raid5f_rebuild_test 00:16:17.248 ************************************ 00:16:17.508 13:29:05 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:17.508 13:29:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:17.508 13:29:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.508 13:29:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.508 ************************************ 00:16:17.508 START TEST raid5f_rebuild_test_sb 00:16:17.508 ************************************ 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:17.508 
13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 
00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84765 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84765 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84765 ']' 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.508 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.508 [2024-11-26 13:29:05.979576] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:17.508 [2024-11-26 13:29:05.979973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84765 ] 00:16:17.508 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:17.508 Zero copy mechanism will not be used. 00:16:17.767 [2024-11-26 13:29:06.161161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.767 [2024-11-26 13:29:06.260602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.026 [2024-11-26 13:29:06.427852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.026 [2024-11-26 13:29:06.428114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.594 BaseBdev1_malloc 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.594 [2024-11-26 13:29:06.993452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:18.594 [2024-11-26 13:29:06.993736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.594 [2024-11-26 13:29:06.993775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:18.594 [2024-11-26 13:29:06.993794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.594 [2024-11-26 13:29:06.996328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.594 [2024-11-26 13:29:06.996514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:18.594 BaseBdev1 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:18.594 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.595 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.595 BaseBdev2_malloc 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:18.595 
13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.595 [2024-11-26 13:29:07.039273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:18.595 [2024-11-26 13:29:07.039466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.595 [2024-11-26 13:29:07.039500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:18.595 [2024-11-26 13:29:07.039519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.595 [2024-11-26 13:29:07.041798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.595 [2024-11-26 13:29:07.041839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:18.595 BaseBdev2 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.595 BaseBdev3_malloc 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.595 [2024-11-26 13:29:07.088918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:18.595 [2024-11-26 13:29:07.089025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.595 [2024-11-26 13:29:07.089185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:18.595 [2024-11-26 13:29:07.089357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.595 [2024-11-26 13:29:07.091783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.595 [2024-11-26 13:29:07.091959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:18.595 BaseBdev3 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.595 BaseBdev4_malloc 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.595 [2024-11-26 13:29:07.134635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:18.595 
[2024-11-26 13:29:07.134684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.595 [2024-11-26 13:29:07.134706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:18.595 [2024-11-26 13:29:07.134721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.595 [2024-11-26 13:29:07.136965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.595 [2024-11-26 13:29:07.137140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:18.595 BaseBdev4 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.595 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.854 spare_malloc 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.854 spare_delay 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.854 13:29:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.854 [2024-11-26 13:29:07.192151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:18.854 [2024-11-26 13:29:07.192376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.854 [2024-11-26 13:29:07.192410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:18.854 [2024-11-26 13:29:07.192428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.854 [2024-11-26 13:29:07.194774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.854 [2024-11-26 13:29:07.194965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:18.854 spare 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.854 [2024-11-26 13:29:07.200205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.854 [2024-11-26 13:29:07.202358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.854 [2024-11-26 13:29:07.202436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.854 [2024-11-26 13:29:07.202511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.854 [2024-11-26 13:29:07.202727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:18.854 [2024-11-26 
13:29:07.202749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.854 [2024-11-26 13:29:07.203035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:18.854 [2024-11-26 13:29:07.208657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:18.854 [2024-11-26 13:29:07.208791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:18.854 [2024-11-26 13:29:07.209128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.854 13:29:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.854 "name": "raid_bdev1", 00:16:18.854 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:18.854 "strip_size_kb": 64, 00:16:18.854 "state": "online", 00:16:18.854 "raid_level": "raid5f", 00:16:18.854 "superblock": true, 00:16:18.854 "num_base_bdevs": 4, 00:16:18.854 "num_base_bdevs_discovered": 4, 00:16:18.854 "num_base_bdevs_operational": 4, 00:16:18.854 "base_bdevs_list": [ 00:16:18.854 { 00:16:18.854 "name": "BaseBdev1", 00:16:18.854 "uuid": "c21303d6-6c1a-5987-a961-bfcddd2f8ffd", 00:16:18.854 "is_configured": true, 00:16:18.854 "data_offset": 2048, 00:16:18.854 "data_size": 63488 00:16:18.854 }, 00:16:18.854 { 00:16:18.854 "name": "BaseBdev2", 00:16:18.854 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:18.854 "is_configured": true, 00:16:18.854 "data_offset": 2048, 00:16:18.854 "data_size": 63488 00:16:18.854 }, 00:16:18.854 { 00:16:18.854 "name": "BaseBdev3", 00:16:18.854 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:18.854 "is_configured": true, 00:16:18.854 "data_offset": 2048, 00:16:18.854 "data_size": 63488 00:16:18.854 }, 00:16:18.854 { 00:16:18.854 "name": "BaseBdev4", 00:16:18.854 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:18.854 "is_configured": true, 00:16:18.854 "data_offset": 2048, 00:16:18.854 "data_size": 63488 00:16:18.854 } 00:16:18.854 ] 00:16:18.854 }' 00:16:18.854 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.854 13:29:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.421 [2024-11-26 13:29:07.715831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:19.421 13:29:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.421 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:19.680 [2024-11-26 13:29:08.091738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:19.680 /dev/nbd0 00:16:19.680 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:19.680 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:19.680 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.681 1+0 records in 00:16:19.681 1+0 records out 00:16:19.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312758 s, 13.1 MB/s 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:19.681 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:20.248 496+0 records in 00:16:20.248 496+0 records out 00:16:20.248 97517568 bytes (98 MB, 93 MiB) copied, 0.507299 s, 192 MB/s 00:16:20.248 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:20.248 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.248 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:20.248 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.248 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:20.248 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.248 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:20.507 [2024-11-26 13:29:08.949212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.507 [2024-11-26 13:29:08.976591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.507 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.507 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.507 "name": "raid_bdev1", 00:16:20.507 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:20.507 "strip_size_kb": 64, 00:16:20.507 "state": "online", 00:16:20.507 "raid_level": "raid5f", 00:16:20.507 "superblock": true, 00:16:20.507 "num_base_bdevs": 4, 00:16:20.507 "num_base_bdevs_discovered": 3, 00:16:20.507 "num_base_bdevs_operational": 3, 00:16:20.507 "base_bdevs_list": [ 00:16:20.507 { 00:16:20.507 "name": null, 00:16:20.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.507 "is_configured": false, 00:16:20.507 "data_offset": 0, 00:16:20.507 "data_size": 63488 00:16:20.507 }, 00:16:20.507 { 00:16:20.507 "name": "BaseBdev2", 00:16:20.507 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:20.507 "is_configured": true, 00:16:20.507 "data_offset": 2048, 00:16:20.507 "data_size": 63488 00:16:20.507 }, 00:16:20.507 { 00:16:20.507 "name": "BaseBdev3", 00:16:20.507 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:20.507 "is_configured": true, 00:16:20.507 "data_offset": 2048, 00:16:20.507 "data_size": 63488 00:16:20.507 }, 00:16:20.507 { 00:16:20.507 "name": "BaseBdev4", 00:16:20.507 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:20.507 "is_configured": true, 00:16:20.507 "data_offset": 2048, 00:16:20.507 "data_size": 63488 00:16:20.507 } 00:16:20.507 ] 00:16:20.507 }' 00:16:20.507 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.507 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.074 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.074 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.074 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.074 [2024-11-26 13:29:09.464678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:16:21.075 [2024-11-26 13:29:09.475926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:21.075 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.075 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:21.075 [2024-11-26 13:29:09.483431] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.010 "name": "raid_bdev1", 00:16:22.010 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:22.010 "strip_size_kb": 64, 00:16:22.010 "state": "online", 00:16:22.010 "raid_level": "raid5f", 00:16:22.010 "superblock": true, 00:16:22.010 "num_base_bdevs": 4, 
00:16:22.010 "num_base_bdevs_discovered": 4, 00:16:22.010 "num_base_bdevs_operational": 4, 00:16:22.010 "process": { 00:16:22.010 "type": "rebuild", 00:16:22.010 "target": "spare", 00:16:22.010 "progress": { 00:16:22.010 "blocks": 17280, 00:16:22.010 "percent": 9 00:16:22.010 } 00:16:22.010 }, 00:16:22.010 "base_bdevs_list": [ 00:16:22.010 { 00:16:22.010 "name": "spare", 00:16:22.010 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:22.010 "is_configured": true, 00:16:22.010 "data_offset": 2048, 00:16:22.010 "data_size": 63488 00:16:22.010 }, 00:16:22.010 { 00:16:22.010 "name": "BaseBdev2", 00:16:22.010 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:22.010 "is_configured": true, 00:16:22.010 "data_offset": 2048, 00:16:22.010 "data_size": 63488 00:16:22.010 }, 00:16:22.010 { 00:16:22.010 "name": "BaseBdev3", 00:16:22.010 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:22.010 "is_configured": true, 00:16:22.010 "data_offset": 2048, 00:16:22.010 "data_size": 63488 00:16:22.010 }, 00:16:22.010 { 00:16:22.010 "name": "BaseBdev4", 00:16:22.010 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:22.010 "is_configured": true, 00:16:22.010 "data_offset": 2048, 00:16:22.010 "data_size": 63488 00:16:22.010 } 00:16:22.010 ] 00:16:22.010 }' 00:16:22.010 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.269 13:29:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.269 [2024-11-26 13:29:10.652591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.269 [2024-11-26 13:29:10.691981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.269 [2024-11-26 13:29:10.692201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.269 [2024-11-26 13:29:10.692229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.269 [2024-11-26 13:29:10.692286] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.269 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.269 "name": "raid_bdev1", 00:16:22.269 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:22.269 "strip_size_kb": 64, 00:16:22.269 "state": "online", 00:16:22.269 "raid_level": "raid5f", 00:16:22.269 "superblock": true, 00:16:22.269 "num_base_bdevs": 4, 00:16:22.269 "num_base_bdevs_discovered": 3, 00:16:22.269 "num_base_bdevs_operational": 3, 00:16:22.269 "base_bdevs_list": [ 00:16:22.269 { 00:16:22.269 "name": null, 00:16:22.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.269 "is_configured": false, 00:16:22.269 "data_offset": 0, 00:16:22.269 "data_size": 63488 00:16:22.269 }, 00:16:22.269 { 00:16:22.269 "name": "BaseBdev2", 00:16:22.269 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:22.269 "is_configured": true, 00:16:22.269 "data_offset": 2048, 00:16:22.269 "data_size": 63488 00:16:22.269 }, 00:16:22.269 { 00:16:22.269 "name": "BaseBdev3", 00:16:22.269 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:22.269 "is_configured": true, 00:16:22.269 "data_offset": 2048, 00:16:22.269 "data_size": 63488 00:16:22.269 }, 00:16:22.269 { 00:16:22.269 "name": "BaseBdev4", 00:16:22.269 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:22.269 "is_configured": true, 00:16:22.269 "data_offset": 2048, 00:16:22.269 "data_size": 63488 00:16:22.269 } 00:16:22.269 ] 00:16:22.269 }' 00:16:22.270 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.270 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.840 "name": "raid_bdev1", 00:16:22.840 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:22.840 "strip_size_kb": 64, 00:16:22.840 "state": "online", 00:16:22.840 "raid_level": "raid5f", 00:16:22.840 "superblock": true, 00:16:22.840 "num_base_bdevs": 4, 00:16:22.840 "num_base_bdevs_discovered": 3, 00:16:22.840 "num_base_bdevs_operational": 3, 00:16:22.840 "base_bdevs_list": [ 00:16:22.840 { 00:16:22.840 "name": null, 00:16:22.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.840 "is_configured": false, 00:16:22.840 "data_offset": 0, 00:16:22.840 "data_size": 63488 00:16:22.840 }, 00:16:22.840 { 
00:16:22.840 "name": "BaseBdev2", 00:16:22.840 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:22.840 "is_configured": true, 00:16:22.840 "data_offset": 2048, 00:16:22.840 "data_size": 63488 00:16:22.840 }, 00:16:22.840 { 00:16:22.840 "name": "BaseBdev3", 00:16:22.840 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:22.840 "is_configured": true, 00:16:22.840 "data_offset": 2048, 00:16:22.840 "data_size": 63488 00:16:22.840 }, 00:16:22.840 { 00:16:22.840 "name": "BaseBdev4", 00:16:22.840 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:22.840 "is_configured": true, 00:16:22.840 "data_offset": 2048, 00:16:22.840 "data_size": 63488 00:16:22.840 } 00:16:22.840 ] 00:16:22.840 }' 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.840 [2024-11-26 13:29:11.389542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.840 [2024-11-26 13:29:11.399622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.840 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:23.099 [2024-11-26 13:29:11.406728] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.034 "name": "raid_bdev1", 00:16:24.034 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:24.034 "strip_size_kb": 64, 00:16:24.034 "state": "online", 00:16:24.034 "raid_level": "raid5f", 00:16:24.034 "superblock": true, 00:16:24.034 "num_base_bdevs": 4, 00:16:24.034 "num_base_bdevs_discovered": 4, 00:16:24.034 "num_base_bdevs_operational": 4, 00:16:24.034 "process": { 00:16:24.034 "type": "rebuild", 00:16:24.034 "target": "spare", 00:16:24.034 "progress": { 00:16:24.034 "blocks": 17280, 00:16:24.034 "percent": 9 00:16:24.034 } 00:16:24.034 }, 00:16:24.034 "base_bdevs_list": [ 00:16:24.034 { 00:16:24.034 "name": "spare", 00:16:24.034 "uuid": 
"9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:24.034 "is_configured": true, 00:16:24.034 "data_offset": 2048, 00:16:24.034 "data_size": 63488 00:16:24.034 }, 00:16:24.034 { 00:16:24.034 "name": "BaseBdev2", 00:16:24.034 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:24.034 "is_configured": true, 00:16:24.034 "data_offset": 2048, 00:16:24.034 "data_size": 63488 00:16:24.034 }, 00:16:24.034 { 00:16:24.034 "name": "BaseBdev3", 00:16:24.034 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:24.034 "is_configured": true, 00:16:24.034 "data_offset": 2048, 00:16:24.034 "data_size": 63488 00:16:24.034 }, 00:16:24.034 { 00:16:24.034 "name": "BaseBdev4", 00:16:24.034 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:24.034 "is_configured": true, 00:16:24.034 "data_offset": 2048, 00:16:24.034 "data_size": 63488 00:16:24.034 } 00:16:24.034 ] 00:16:24.034 }' 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:24.034 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=654 00:16:24.034 
13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.034 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.314 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.314 "name": "raid_bdev1", 00:16:24.314 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:24.314 "strip_size_kb": 64, 00:16:24.314 "state": "online", 00:16:24.314 "raid_level": "raid5f", 00:16:24.314 "superblock": true, 00:16:24.314 "num_base_bdevs": 4, 00:16:24.314 "num_base_bdevs_discovered": 4, 00:16:24.314 "num_base_bdevs_operational": 4, 00:16:24.314 "process": { 00:16:24.314 "type": "rebuild", 00:16:24.314 "target": "spare", 00:16:24.314 "progress": { 00:16:24.314 "blocks": 21120, 00:16:24.314 "percent": 11 00:16:24.314 } 00:16:24.315 }, 00:16:24.315 "base_bdevs_list": [ 00:16:24.315 { 00:16:24.315 "name": "spare", 00:16:24.315 "uuid": 
"9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:24.315 "is_configured": true, 00:16:24.315 "data_offset": 2048, 00:16:24.315 "data_size": 63488 00:16:24.315 }, 00:16:24.315 { 00:16:24.315 "name": "BaseBdev2", 00:16:24.315 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:24.315 "is_configured": true, 00:16:24.315 "data_offset": 2048, 00:16:24.315 "data_size": 63488 00:16:24.315 }, 00:16:24.315 { 00:16:24.315 "name": "BaseBdev3", 00:16:24.315 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:24.315 "is_configured": true, 00:16:24.315 "data_offset": 2048, 00:16:24.315 "data_size": 63488 00:16:24.315 }, 00:16:24.315 { 00:16:24.315 "name": "BaseBdev4", 00:16:24.315 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:24.315 "is_configured": true, 00:16:24.315 "data_offset": 2048, 00:16:24.315 "data_size": 63488 00:16:24.315 } 00:16:24.315 ] 00:16:24.315 }' 00:16:24.315 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.315 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.315 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.315 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.315 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.262 "name": "raid_bdev1", 00:16:25.262 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:25.262 "strip_size_kb": 64, 00:16:25.262 "state": "online", 00:16:25.262 "raid_level": "raid5f", 00:16:25.262 "superblock": true, 00:16:25.262 "num_base_bdevs": 4, 00:16:25.262 "num_base_bdevs_discovered": 4, 00:16:25.262 "num_base_bdevs_operational": 4, 00:16:25.262 "process": { 00:16:25.262 "type": "rebuild", 00:16:25.262 "target": "spare", 00:16:25.262 "progress": { 00:16:25.262 "blocks": 44160, 00:16:25.262 "percent": 23 00:16:25.262 } 00:16:25.262 }, 00:16:25.262 "base_bdevs_list": [ 00:16:25.262 { 00:16:25.262 "name": "spare", 00:16:25.262 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:25.262 "is_configured": true, 00:16:25.262 "data_offset": 2048, 00:16:25.262 "data_size": 63488 00:16:25.262 }, 00:16:25.262 { 00:16:25.262 "name": "BaseBdev2", 00:16:25.262 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:25.262 "is_configured": true, 00:16:25.262 "data_offset": 2048, 00:16:25.262 "data_size": 63488 00:16:25.262 }, 00:16:25.262 { 00:16:25.262 "name": "BaseBdev3", 00:16:25.262 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:25.262 "is_configured": true, 00:16:25.262 
"data_offset": 2048, 00:16:25.262 "data_size": 63488 00:16:25.262 }, 00:16:25.262 { 00:16:25.262 "name": "BaseBdev4", 00:16:25.262 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:25.262 "is_configured": true, 00:16:25.262 "data_offset": 2048, 00:16:25.262 "data_size": 63488 00:16:25.262 } 00:16:25.262 ] 00:16:25.262 }' 00:16:25.262 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.521 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.521 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.521 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.521 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.459 "name": "raid_bdev1", 00:16:26.459 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:26.459 "strip_size_kb": 64, 00:16:26.459 "state": "online", 00:16:26.459 "raid_level": "raid5f", 00:16:26.459 "superblock": true, 00:16:26.459 "num_base_bdevs": 4, 00:16:26.459 "num_base_bdevs_discovered": 4, 00:16:26.459 "num_base_bdevs_operational": 4, 00:16:26.459 "process": { 00:16:26.459 "type": "rebuild", 00:16:26.459 "target": "spare", 00:16:26.459 "progress": { 00:16:26.459 "blocks": 65280, 00:16:26.459 "percent": 34 00:16:26.459 } 00:16:26.459 }, 00:16:26.459 "base_bdevs_list": [ 00:16:26.459 { 00:16:26.459 "name": "spare", 00:16:26.459 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:26.459 "is_configured": true, 00:16:26.459 "data_offset": 2048, 00:16:26.459 "data_size": 63488 00:16:26.459 }, 00:16:26.459 { 00:16:26.459 "name": "BaseBdev2", 00:16:26.459 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:26.459 "is_configured": true, 00:16:26.459 "data_offset": 2048, 00:16:26.459 "data_size": 63488 00:16:26.459 }, 00:16:26.459 { 00:16:26.459 "name": "BaseBdev3", 00:16:26.459 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:26.459 "is_configured": true, 00:16:26.459 "data_offset": 2048, 00:16:26.459 "data_size": 63488 00:16:26.459 }, 00:16:26.459 { 00:16:26.459 "name": "BaseBdev4", 00:16:26.459 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:26.459 "is_configured": true, 00:16:26.459 "data_offset": 2048, 00:16:26.459 "data_size": 63488 00:16:26.459 } 00:16:26.459 ] 00:16:26.459 }' 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.459 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:26.459 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.718 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.718 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.655 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.656 "name": "raid_bdev1", 00:16:27.656 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:27.656 "strip_size_kb": 64, 00:16:27.656 "state": "online", 00:16:27.656 "raid_level": "raid5f", 00:16:27.656 "superblock": true, 00:16:27.656 "num_base_bdevs": 4, 00:16:27.656 "num_base_bdevs_discovered": 4, 
00:16:27.656 "num_base_bdevs_operational": 4, 00:16:27.656 "process": { 00:16:27.656 "type": "rebuild", 00:16:27.656 "target": "spare", 00:16:27.656 "progress": { 00:16:27.656 "blocks": 88320, 00:16:27.656 "percent": 46 00:16:27.656 } 00:16:27.656 }, 00:16:27.656 "base_bdevs_list": [ 00:16:27.656 { 00:16:27.656 "name": "spare", 00:16:27.656 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:27.656 "is_configured": true, 00:16:27.656 "data_offset": 2048, 00:16:27.656 "data_size": 63488 00:16:27.656 }, 00:16:27.656 { 00:16:27.656 "name": "BaseBdev2", 00:16:27.656 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:27.656 "is_configured": true, 00:16:27.656 "data_offset": 2048, 00:16:27.656 "data_size": 63488 00:16:27.656 }, 00:16:27.656 { 00:16:27.656 "name": "BaseBdev3", 00:16:27.656 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:27.656 "is_configured": true, 00:16:27.656 "data_offset": 2048, 00:16:27.656 "data_size": 63488 00:16:27.656 }, 00:16:27.656 { 00:16:27.656 "name": "BaseBdev4", 00:16:27.656 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:27.656 "is_configured": true, 00:16:27.656 "data_offset": 2048, 00:16:27.656 "data_size": 63488 00:16:27.656 } 00:16:27.656 ] 00:16:27.656 }' 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.656 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.063 "name": "raid_bdev1", 00:16:29.063 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:29.063 "strip_size_kb": 64, 00:16:29.063 "state": "online", 00:16:29.063 "raid_level": "raid5f", 00:16:29.063 "superblock": true, 00:16:29.063 "num_base_bdevs": 4, 00:16:29.063 "num_base_bdevs_discovered": 4, 00:16:29.063 "num_base_bdevs_operational": 4, 00:16:29.063 "process": { 00:16:29.063 "type": "rebuild", 00:16:29.063 "target": "spare", 00:16:29.063 "progress": { 00:16:29.063 "blocks": 109440, 00:16:29.063 "percent": 57 00:16:29.063 } 00:16:29.063 }, 00:16:29.063 "base_bdevs_list": [ 00:16:29.063 { 00:16:29.063 "name": "spare", 00:16:29.063 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:29.063 "is_configured": true, 00:16:29.063 "data_offset": 2048, 00:16:29.063 "data_size": 63488 00:16:29.063 }, 00:16:29.063 { 00:16:29.063 "name": "BaseBdev2", 
00:16:29.063 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:29.063 "is_configured": true, 00:16:29.063 "data_offset": 2048, 00:16:29.063 "data_size": 63488 00:16:29.063 }, 00:16:29.063 { 00:16:29.063 "name": "BaseBdev3", 00:16:29.063 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:29.063 "is_configured": true, 00:16:29.063 "data_offset": 2048, 00:16:29.063 "data_size": 63488 00:16:29.063 }, 00:16:29.063 { 00:16:29.063 "name": "BaseBdev4", 00:16:29.063 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:29.063 "is_configured": true, 00:16:29.063 "data_offset": 2048, 00:16:29.063 "data_size": 63488 00:16:29.063 } 00:16:29.063 ] 00:16:29.063 }' 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.063 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.000 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.000 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.001 "name": "raid_bdev1", 00:16:30.001 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:30.001 "strip_size_kb": 64, 00:16:30.001 "state": "online", 00:16:30.001 "raid_level": "raid5f", 00:16:30.001 "superblock": true, 00:16:30.001 "num_base_bdevs": 4, 00:16:30.001 "num_base_bdevs_discovered": 4, 00:16:30.001 "num_base_bdevs_operational": 4, 00:16:30.001 "process": { 00:16:30.001 "type": "rebuild", 00:16:30.001 "target": "spare", 00:16:30.001 "progress": { 00:16:30.001 "blocks": 132480, 00:16:30.001 "percent": 69 00:16:30.001 } 00:16:30.001 }, 00:16:30.001 "base_bdevs_list": [ 00:16:30.001 { 00:16:30.001 "name": "spare", 00:16:30.001 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:30.001 "is_configured": true, 00:16:30.001 "data_offset": 2048, 00:16:30.001 "data_size": 63488 00:16:30.001 }, 00:16:30.001 { 00:16:30.001 "name": "BaseBdev2", 00:16:30.001 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:30.001 "is_configured": true, 00:16:30.001 "data_offset": 2048, 00:16:30.001 "data_size": 63488 00:16:30.001 }, 00:16:30.001 { 00:16:30.001 "name": "BaseBdev3", 00:16:30.001 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:30.001 "is_configured": true, 00:16:30.001 "data_offset": 2048, 00:16:30.001 "data_size": 63488 00:16:30.001 }, 00:16:30.001 { 00:16:30.001 "name": "BaseBdev4", 00:16:30.001 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:30.001 "is_configured": true, 
00:16:30.001 "data_offset": 2048, 00:16:30.001 "data_size": 63488 00:16:30.001 } 00:16:30.001 ] 00:16:30.001 }' 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.001 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:31.379 "name": "raid_bdev1", 00:16:31.379 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:31.379 "strip_size_kb": 64, 00:16:31.379 "state": "online", 00:16:31.379 "raid_level": "raid5f", 00:16:31.379 "superblock": true, 00:16:31.379 "num_base_bdevs": 4, 00:16:31.379 "num_base_bdevs_discovered": 4, 00:16:31.379 "num_base_bdevs_operational": 4, 00:16:31.379 "process": { 00:16:31.379 "type": "rebuild", 00:16:31.379 "target": "spare", 00:16:31.379 "progress": { 00:16:31.379 "blocks": 153600, 00:16:31.379 "percent": 80 00:16:31.379 } 00:16:31.379 }, 00:16:31.379 "base_bdevs_list": [ 00:16:31.379 { 00:16:31.379 "name": "spare", 00:16:31.379 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:31.379 "is_configured": true, 00:16:31.379 "data_offset": 2048, 00:16:31.379 "data_size": 63488 00:16:31.379 }, 00:16:31.379 { 00:16:31.379 "name": "BaseBdev2", 00:16:31.379 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:31.379 "is_configured": true, 00:16:31.379 "data_offset": 2048, 00:16:31.379 "data_size": 63488 00:16:31.379 }, 00:16:31.379 { 00:16:31.379 "name": "BaseBdev3", 00:16:31.379 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:31.379 "is_configured": true, 00:16:31.379 "data_offset": 2048, 00:16:31.379 "data_size": 63488 00:16:31.379 }, 00:16:31.379 { 00:16:31.379 "name": "BaseBdev4", 00:16:31.379 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:31.379 "is_configured": true, 00:16:31.379 "data_offset": 2048, 00:16:31.379 "data_size": 63488 00:16:31.379 } 00:16:31.379 ] 00:16:31.379 }' 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:16:31.379 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.317 "name": "raid_bdev1", 00:16:32.317 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:32.317 "strip_size_kb": 64, 00:16:32.317 "state": "online", 00:16:32.317 "raid_level": "raid5f", 00:16:32.317 "superblock": true, 00:16:32.317 "num_base_bdevs": 4, 00:16:32.317 "num_base_bdevs_discovered": 4, 00:16:32.317 "num_base_bdevs_operational": 4, 00:16:32.317 "process": { 00:16:32.317 "type": "rebuild", 00:16:32.317 "target": "spare", 00:16:32.317 "progress": { 00:16:32.317 "blocks": 176640, 00:16:32.317 "percent": 92 00:16:32.317 
} 00:16:32.317 }, 00:16:32.317 "base_bdevs_list": [ 00:16:32.317 { 00:16:32.317 "name": "spare", 00:16:32.317 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:32.317 "is_configured": true, 00:16:32.317 "data_offset": 2048, 00:16:32.317 "data_size": 63488 00:16:32.317 }, 00:16:32.317 { 00:16:32.317 "name": "BaseBdev2", 00:16:32.317 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:32.317 "is_configured": true, 00:16:32.317 "data_offset": 2048, 00:16:32.317 "data_size": 63488 00:16:32.317 }, 00:16:32.317 { 00:16:32.317 "name": "BaseBdev3", 00:16:32.317 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:32.317 "is_configured": true, 00:16:32.317 "data_offset": 2048, 00:16:32.317 "data_size": 63488 00:16:32.317 }, 00:16:32.317 { 00:16:32.317 "name": "BaseBdev4", 00:16:32.317 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:32.317 "is_configured": true, 00:16:32.317 "data_offset": 2048, 00:16:32.317 "data_size": 63488 00:16:32.317 } 00:16:32.317 ] 00:16:32.317 }' 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.317 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.254 [2024-11-26 13:29:21.474839] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:33.254 [2024-11-26 13:29:21.474916] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:33.254 [2024-11-26 13:29:21.475047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.514 "name": "raid_bdev1", 00:16:33.514 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:33.514 "strip_size_kb": 64, 00:16:33.514 "state": "online", 00:16:33.514 "raid_level": "raid5f", 00:16:33.514 "superblock": true, 00:16:33.514 "num_base_bdevs": 4, 00:16:33.514 "num_base_bdevs_discovered": 4, 00:16:33.514 "num_base_bdevs_operational": 4, 00:16:33.514 "base_bdevs_list": [ 00:16:33.514 { 00:16:33.514 "name": "spare", 00:16:33.514 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:33.514 "is_configured": true, 00:16:33.514 "data_offset": 2048, 00:16:33.514 "data_size": 63488 00:16:33.514 }, 00:16:33.514 { 00:16:33.514 "name": "BaseBdev2", 00:16:33.514 "uuid": 
"c866e1de-2646-5c27-accc-e77c0f664424", 00:16:33.514 "is_configured": true, 00:16:33.514 "data_offset": 2048, 00:16:33.514 "data_size": 63488 00:16:33.514 }, 00:16:33.514 { 00:16:33.514 "name": "BaseBdev3", 00:16:33.514 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:33.514 "is_configured": true, 00:16:33.514 "data_offset": 2048, 00:16:33.514 "data_size": 63488 00:16:33.514 }, 00:16:33.514 { 00:16:33.514 "name": "BaseBdev4", 00:16:33.514 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:33.514 "is_configured": true, 00:16:33.514 "data_offset": 2048, 00:16:33.514 "data_size": 63488 00:16:33.514 } 00:16:33.514 ] 00:16:33.514 }' 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:33.514 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.514 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.773 "name": "raid_bdev1", 00:16:33.773 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:33.773 "strip_size_kb": 64, 00:16:33.773 "state": "online", 00:16:33.773 "raid_level": "raid5f", 00:16:33.773 "superblock": true, 00:16:33.773 "num_base_bdevs": 4, 00:16:33.773 "num_base_bdevs_discovered": 4, 00:16:33.773 "num_base_bdevs_operational": 4, 00:16:33.773 "base_bdevs_list": [ 00:16:33.773 { 00:16:33.773 "name": "spare", 00:16:33.773 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:33.773 "is_configured": true, 00:16:33.773 "data_offset": 2048, 00:16:33.773 "data_size": 63488 00:16:33.773 }, 00:16:33.773 { 00:16:33.773 "name": "BaseBdev2", 00:16:33.773 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:33.773 "is_configured": true, 00:16:33.773 "data_offset": 2048, 00:16:33.773 "data_size": 63488 00:16:33.773 }, 00:16:33.773 { 00:16:33.773 "name": "BaseBdev3", 00:16:33.773 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:33.773 "is_configured": true, 00:16:33.773 "data_offset": 2048, 00:16:33.773 "data_size": 63488 00:16:33.773 }, 00:16:33.773 { 00:16:33.773 "name": "BaseBdev4", 00:16:33.773 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:33.773 "is_configured": true, 00:16:33.773 "data_offset": 2048, 00:16:33.773 "data_size": 63488 00:16:33.773 } 00:16:33.773 ] 00:16:33.773 }' 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.773 
13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.773 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.774 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:33.774 "name": "raid_bdev1", 00:16:33.774 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:33.774 "strip_size_kb": 64, 00:16:33.774 "state": "online", 00:16:33.774 "raid_level": "raid5f", 00:16:33.774 "superblock": true, 00:16:33.774 "num_base_bdevs": 4, 00:16:33.774 "num_base_bdevs_discovered": 4, 00:16:33.774 "num_base_bdevs_operational": 4, 00:16:33.774 "base_bdevs_list": [ 00:16:33.774 { 00:16:33.774 "name": "spare", 00:16:33.774 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "name": "BaseBdev2", 00:16:33.774 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "name": "BaseBdev3", 00:16:33.774 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "name": "BaseBdev4", 00:16:33.774 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 } 00:16:33.774 ] 00:16:33.774 }' 00:16:33.774 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.774 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.341 [2024-11-26 13:29:22.728664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:16:34.341 [2024-11-26 13:29:22.728697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.341 [2024-11-26 13:29:22.728767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.341 [2024-11-26 13:29:22.728859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.341 [2024-11-26 13:29:22.728882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.341 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:34.600 /dev/nbd0 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.600 1+0 records in 
00:16:34.600 1+0 records out 00:16:34.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335629 s, 12.2 MB/s 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.600 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:34.860 /dev/nbd1 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:34.860 13:29:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.860 1+0 records in 00:16:34.860 1+0 records out 00:16:34.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362129 s, 11.3 MB/s 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.860 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:35.119 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:35.120 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.120 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.120 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.120 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:16:35.120 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.120 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.378 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:35.638 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:35.638 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:35.638 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:35.638 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.638 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.638 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.638 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.638 [2024-11-26 13:29:24.023988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.638 [2024-11-26 13:29:24.024048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.638 [2024-11-26 13:29:24.024081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:35.638 [2024-11-26 13:29:24.024094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.638 [2024-11-26 13:29:24.026453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.638 [2024-11-26 13:29:24.026504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.638 [2024-11-26 13:29:24.026595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:35.639 [2024-11-26 13:29:24.026653] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.639 [2024-11-26 13:29:24.026799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.639 [2024-11-26 13:29:24.026942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.639 [2024-11-26 13:29:24.027049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:35.639 spare 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.639 [2024-11-26 13:29:24.127177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:35.639 [2024-11-26 13:29:24.127213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:35.639 [2024-11-26 13:29:24.127487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:35.639 [2024-11-26 13:29:24.132616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:35.639 [2024-11-26 13:29:24.132640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:35.639 [2024-11-26 13:29:24.132822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.639 "name": "raid_bdev1", 00:16:35.639 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:35.639 "strip_size_kb": 64, 00:16:35.639 "state": "online", 00:16:35.639 "raid_level": "raid5f", 00:16:35.639 "superblock": true, 00:16:35.639 "num_base_bdevs": 4, 00:16:35.639 "num_base_bdevs_discovered": 4, 00:16:35.639 "num_base_bdevs_operational": 4, 00:16:35.639 "base_bdevs_list": [ 00:16:35.639 { 
00:16:35.639 "name": "spare", 00:16:35.639 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:35.639 "is_configured": true, 00:16:35.639 "data_offset": 2048, 00:16:35.639 "data_size": 63488 00:16:35.639 }, 00:16:35.639 { 00:16:35.639 "name": "BaseBdev2", 00:16:35.639 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:35.639 "is_configured": true, 00:16:35.639 "data_offset": 2048, 00:16:35.639 "data_size": 63488 00:16:35.639 }, 00:16:35.639 { 00:16:35.639 "name": "BaseBdev3", 00:16:35.639 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:35.639 "is_configured": true, 00:16:35.639 "data_offset": 2048, 00:16:35.639 "data_size": 63488 00:16:35.639 }, 00:16:35.639 { 00:16:35.639 "name": "BaseBdev4", 00:16:35.639 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:35.639 "is_configured": true, 00:16:35.639 "data_offset": 2048, 00:16:35.639 "data_size": 63488 00:16:35.639 } 00:16:35.639 ] 00:16:35.639 }' 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.639 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.208 "name": "raid_bdev1", 00:16:36.208 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:36.208 "strip_size_kb": 64, 00:16:36.208 "state": "online", 00:16:36.208 "raid_level": "raid5f", 00:16:36.208 "superblock": true, 00:16:36.208 "num_base_bdevs": 4, 00:16:36.208 "num_base_bdevs_discovered": 4, 00:16:36.208 "num_base_bdevs_operational": 4, 00:16:36.208 "base_bdevs_list": [ 00:16:36.208 { 00:16:36.208 "name": "spare", 00:16:36.208 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:36.208 "is_configured": true, 00:16:36.208 "data_offset": 2048, 00:16:36.208 "data_size": 63488 00:16:36.208 }, 00:16:36.208 { 00:16:36.208 "name": "BaseBdev2", 00:16:36.208 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:36.208 "is_configured": true, 00:16:36.208 "data_offset": 2048, 00:16:36.208 "data_size": 63488 00:16:36.208 }, 00:16:36.208 { 00:16:36.208 "name": "BaseBdev3", 00:16:36.208 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:36.208 "is_configured": true, 00:16:36.208 "data_offset": 2048, 00:16:36.208 "data_size": 63488 00:16:36.208 }, 00:16:36.208 { 00:16:36.208 "name": "BaseBdev4", 00:16:36.208 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:36.208 "is_configured": true, 00:16:36.208 "data_offset": 2048, 00:16:36.208 "data_size": 63488 00:16:36.208 } 00:16:36.208 ] 00:16:36.208 }' 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.208 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:36.467 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.468 [2024-11-26 13:29:24.839007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.468 "name": "raid_bdev1", 00:16:36.468 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:36.468 "strip_size_kb": 64, 00:16:36.468 "state": "online", 00:16:36.468 "raid_level": "raid5f", 00:16:36.468 "superblock": true, 00:16:36.468 "num_base_bdevs": 4, 00:16:36.468 "num_base_bdevs_discovered": 3, 00:16:36.468 "num_base_bdevs_operational": 3, 00:16:36.468 "base_bdevs_list": [ 00:16:36.468 { 00:16:36.468 "name": null, 00:16:36.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.468 "is_configured": false, 00:16:36.468 "data_offset": 0, 00:16:36.468 "data_size": 63488 00:16:36.468 }, 00:16:36.468 { 00:16:36.468 "name": "BaseBdev2", 00:16:36.468 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:36.468 "is_configured": true, 00:16:36.468 "data_offset": 2048, 00:16:36.468 "data_size": 63488 00:16:36.468 }, 00:16:36.468 
{ 00:16:36.468 "name": "BaseBdev3", 00:16:36.468 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:36.468 "is_configured": true, 00:16:36.468 "data_offset": 2048, 00:16:36.468 "data_size": 63488 00:16:36.468 }, 00:16:36.468 { 00:16:36.468 "name": "BaseBdev4", 00:16:36.468 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:36.468 "is_configured": true, 00:16:36.468 "data_offset": 2048, 00:16:36.468 "data_size": 63488 00:16:36.468 } 00:16:36.468 ] 00:16:36.468 }' 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.468 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.036 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.036 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.036 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.036 [2024-11-26 13:29:25.363144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.036 [2024-11-26 13:29:25.363335] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.036 [2024-11-26 13:29:25.363364] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:37.036 [2024-11-26 13:29:25.363399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.036 [2024-11-26 13:29:25.374195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:37.036 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.036 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:37.036 [2024-11-26 13:29:25.381296] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.973 "name": "raid_bdev1", 00:16:37.973 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:37.973 "strip_size_kb": 64, 00:16:37.973 "state": "online", 00:16:37.973 
"raid_level": "raid5f", 00:16:37.973 "superblock": true, 00:16:37.973 "num_base_bdevs": 4, 00:16:37.973 "num_base_bdevs_discovered": 4, 00:16:37.973 "num_base_bdevs_operational": 4, 00:16:37.973 "process": { 00:16:37.973 "type": "rebuild", 00:16:37.973 "target": "spare", 00:16:37.973 "progress": { 00:16:37.973 "blocks": 19200, 00:16:37.973 "percent": 10 00:16:37.973 } 00:16:37.973 }, 00:16:37.973 "base_bdevs_list": [ 00:16:37.973 { 00:16:37.973 "name": "spare", 00:16:37.973 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:37.973 "is_configured": true, 00:16:37.973 "data_offset": 2048, 00:16:37.973 "data_size": 63488 00:16:37.973 }, 00:16:37.973 { 00:16:37.973 "name": "BaseBdev2", 00:16:37.973 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:37.973 "is_configured": true, 00:16:37.973 "data_offset": 2048, 00:16:37.973 "data_size": 63488 00:16:37.973 }, 00:16:37.973 { 00:16:37.973 "name": "BaseBdev3", 00:16:37.973 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:37.973 "is_configured": true, 00:16:37.973 "data_offset": 2048, 00:16:37.973 "data_size": 63488 00:16:37.973 }, 00:16:37.973 { 00:16:37.973 "name": "BaseBdev4", 00:16:37.973 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:37.973 "is_configured": true, 00:16:37.973 "data_offset": 2048, 00:16:37.973 "data_size": 63488 00:16:37.973 } 00:16:37.973 ] 00:16:37.973 }' 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.973 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.233 [2024-11-26 13:29:26.550403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.233 [2024-11-26 13:29:26.589684] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.233 [2024-11-26 13:29:26.589752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.233 [2024-11-26 13:29:26.589773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.233 [2024-11-26 13:29:26.589786] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.233 "name": "raid_bdev1", 00:16:38.233 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:38.233 "strip_size_kb": 64, 00:16:38.233 "state": "online", 00:16:38.233 "raid_level": "raid5f", 00:16:38.233 "superblock": true, 00:16:38.233 "num_base_bdevs": 4, 00:16:38.233 "num_base_bdevs_discovered": 3, 00:16:38.233 "num_base_bdevs_operational": 3, 00:16:38.233 "base_bdevs_list": [ 00:16:38.233 { 00:16:38.233 "name": null, 00:16:38.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.233 "is_configured": false, 00:16:38.233 "data_offset": 0, 00:16:38.233 "data_size": 63488 00:16:38.233 }, 00:16:38.233 { 00:16:38.233 "name": "BaseBdev2", 00:16:38.233 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:38.233 "is_configured": true, 00:16:38.233 "data_offset": 2048, 00:16:38.233 "data_size": 63488 00:16:38.233 }, 00:16:38.233 { 00:16:38.233 "name": "BaseBdev3", 00:16:38.233 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:38.233 "is_configured": true, 00:16:38.233 "data_offset": 2048, 00:16:38.233 "data_size": 63488 00:16:38.233 }, 00:16:38.233 { 00:16:38.233 "name": "BaseBdev4", 00:16:38.233 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:38.233 "is_configured": true, 00:16:38.233 "data_offset": 2048, 00:16:38.233 "data_size": 63488 00:16:38.233 } 00:16:38.233 ] 00:16:38.233 
}' 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.233 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.801 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.801 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.802 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.802 [2024-11-26 13:29:27.150937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.802 [2024-11-26 13:29:27.150993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.802 [2024-11-26 13:29:27.151026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:38.802 [2024-11-26 13:29:27.151043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.802 [2024-11-26 13:29:27.151591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.802 [2024-11-26 13:29:27.151632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.802 [2024-11-26 13:29:27.151719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.802 [2024-11-26 13:29:27.151741] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:38.802 [2024-11-26 13:29:27.151752] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:38.802 [2024-11-26 13:29:27.151784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.802 [2024-11-26 13:29:27.161713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:38.802 spare 00:16:38.802 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.802 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:38.802 [2024-11-26 13:29:27.168396] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.739 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.740 "name": "raid_bdev1", 00:16:39.740 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:39.740 "strip_size_kb": 64, 00:16:39.740 "state": 
"online", 00:16:39.740 "raid_level": "raid5f", 00:16:39.740 "superblock": true, 00:16:39.740 "num_base_bdevs": 4, 00:16:39.740 "num_base_bdevs_discovered": 4, 00:16:39.740 "num_base_bdevs_operational": 4, 00:16:39.740 "process": { 00:16:39.740 "type": "rebuild", 00:16:39.740 "target": "spare", 00:16:39.740 "progress": { 00:16:39.740 "blocks": 19200, 00:16:39.740 "percent": 10 00:16:39.740 } 00:16:39.740 }, 00:16:39.740 "base_bdevs_list": [ 00:16:39.740 { 00:16:39.740 "name": "spare", 00:16:39.740 "uuid": "9dda028a-a601-501a-9d5e-9501a67918d3", 00:16:39.740 "is_configured": true, 00:16:39.740 "data_offset": 2048, 00:16:39.740 "data_size": 63488 00:16:39.740 }, 00:16:39.740 { 00:16:39.740 "name": "BaseBdev2", 00:16:39.740 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:39.740 "is_configured": true, 00:16:39.740 "data_offset": 2048, 00:16:39.740 "data_size": 63488 00:16:39.740 }, 00:16:39.740 { 00:16:39.740 "name": "BaseBdev3", 00:16:39.740 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:39.740 "is_configured": true, 00:16:39.740 "data_offset": 2048, 00:16:39.740 "data_size": 63488 00:16:39.740 }, 00:16:39.740 { 00:16:39.740 "name": "BaseBdev4", 00:16:39.740 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:39.740 "is_configured": true, 00:16:39.740 "data_offset": 2048, 00:16:39.740 "data_size": 63488 00:16:39.740 } 00:16:39.740 ] 00:16:39.740 }' 00:16:39.740 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.740 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.740 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:39.999 13:29:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.999 [2024-11-26 13:29:28.337444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.999 [2024-11-26 13:29:28.376624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:39.999 [2024-11-26 13:29:28.376680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.999 [2024-11-26 13:29:28.376704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.999 [2024-11-26 13:29:28.376714] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.999 13:29:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.999 "name": "raid_bdev1", 00:16:39.999 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:39.999 "strip_size_kb": 64, 00:16:39.999 "state": "online", 00:16:39.999 "raid_level": "raid5f", 00:16:39.999 "superblock": true, 00:16:39.999 "num_base_bdevs": 4, 00:16:39.999 "num_base_bdevs_discovered": 3, 00:16:39.999 "num_base_bdevs_operational": 3, 00:16:39.999 "base_bdevs_list": [ 00:16:39.999 { 00:16:39.999 "name": null, 00:16:39.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.999 "is_configured": false, 00:16:39.999 "data_offset": 0, 00:16:39.999 "data_size": 63488 00:16:39.999 }, 00:16:39.999 { 00:16:39.999 "name": "BaseBdev2", 00:16:39.999 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:39.999 "is_configured": true, 00:16:39.999 "data_offset": 2048, 00:16:39.999 "data_size": 63488 00:16:39.999 }, 00:16:39.999 { 00:16:39.999 "name": "BaseBdev3", 00:16:39.999 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:39.999 "is_configured": true, 00:16:39.999 "data_offset": 2048, 00:16:39.999 "data_size": 63488 00:16:39.999 }, 00:16:39.999 { 00:16:39.999 "name": "BaseBdev4", 00:16:39.999 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:39.999 "is_configured": true, 00:16:39.999 "data_offset": 2048, 00:16:39.999 
"data_size": 63488 00:16:39.999 } 00:16:39.999 ] 00:16:39.999 }' 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.999 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.567 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.567 "name": "raid_bdev1", 00:16:40.567 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:40.567 "strip_size_kb": 64, 00:16:40.567 "state": "online", 00:16:40.567 "raid_level": "raid5f", 00:16:40.567 "superblock": true, 00:16:40.567 "num_base_bdevs": 4, 00:16:40.567 "num_base_bdevs_discovered": 3, 00:16:40.567 "num_base_bdevs_operational": 3, 00:16:40.567 "base_bdevs_list": [ 00:16:40.567 { 00:16:40.567 "name": null, 00:16:40.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.568 
"is_configured": false, 00:16:40.568 "data_offset": 0, 00:16:40.568 "data_size": 63488 00:16:40.568 }, 00:16:40.568 { 00:16:40.568 "name": "BaseBdev2", 00:16:40.568 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:40.568 "is_configured": true, 00:16:40.568 "data_offset": 2048, 00:16:40.568 "data_size": 63488 00:16:40.568 }, 00:16:40.568 { 00:16:40.568 "name": "BaseBdev3", 00:16:40.568 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:40.568 "is_configured": true, 00:16:40.568 "data_offset": 2048, 00:16:40.568 "data_size": 63488 00:16:40.568 }, 00:16:40.568 { 00:16:40.568 "name": "BaseBdev4", 00:16:40.568 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:40.568 "is_configured": true, 00:16:40.568 "data_offset": 2048, 00:16:40.568 "data_size": 63488 00:16:40.568 } 00:16:40.568 ] 00:16:40.568 }' 00:16:40.568 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.568 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.568 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.568 13:29:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.568 [2024-11-26 13:29:29.053466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:40.568 [2024-11-26 13:29:29.053520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.568 [2024-11-26 13:29:29.053546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:40.568 [2024-11-26 13:29:29.053559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.568 [2024-11-26 13:29:29.053998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.568 [2024-11-26 13:29:29.054036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.568 [2024-11-26 13:29:29.054118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:40.568 [2024-11-26 13:29:29.054135] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:40.568 [2024-11-26 13:29:29.054148] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:40.568 [2024-11-26 13:29:29.054158] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:40.568 BaseBdev1 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.568 13:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.503 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.762 "name": "raid_bdev1", 00:16:41.762 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:41.762 "strip_size_kb": 64, 00:16:41.762 "state": "online", 00:16:41.762 "raid_level": "raid5f", 00:16:41.762 "superblock": true, 00:16:41.762 "num_base_bdevs": 4, 00:16:41.762 "num_base_bdevs_discovered": 3, 00:16:41.762 "num_base_bdevs_operational": 3, 00:16:41.762 "base_bdevs_list": [ 00:16:41.762 { 00:16:41.762 "name": null, 00:16:41.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.762 "is_configured": false, 00:16:41.762 
"data_offset": 0, 00:16:41.762 "data_size": 63488 00:16:41.762 }, 00:16:41.762 { 00:16:41.762 "name": "BaseBdev2", 00:16:41.762 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:41.762 "is_configured": true, 00:16:41.762 "data_offset": 2048, 00:16:41.762 "data_size": 63488 00:16:41.762 }, 00:16:41.762 { 00:16:41.762 "name": "BaseBdev3", 00:16:41.762 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:41.762 "is_configured": true, 00:16:41.762 "data_offset": 2048, 00:16:41.762 "data_size": 63488 00:16:41.762 }, 00:16:41.762 { 00:16:41.762 "name": "BaseBdev4", 00:16:41.762 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:41.762 "is_configured": true, 00:16:41.762 "data_offset": 2048, 00:16:41.762 "data_size": 63488 00:16:41.762 } 00:16:41.762 ] 00:16:41.762 }' 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.762 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.021 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.021 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.021 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.021 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.021 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.021 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.021 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.280 "name": "raid_bdev1", 00:16:42.280 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:42.280 "strip_size_kb": 64, 00:16:42.280 "state": "online", 00:16:42.280 "raid_level": "raid5f", 00:16:42.280 "superblock": true, 00:16:42.280 "num_base_bdevs": 4, 00:16:42.280 "num_base_bdevs_discovered": 3, 00:16:42.280 "num_base_bdevs_operational": 3, 00:16:42.280 "base_bdevs_list": [ 00:16:42.280 { 00:16:42.280 "name": null, 00:16:42.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.280 "is_configured": false, 00:16:42.280 "data_offset": 0, 00:16:42.280 "data_size": 63488 00:16:42.280 }, 00:16:42.280 { 00:16:42.280 "name": "BaseBdev2", 00:16:42.280 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:42.280 "is_configured": true, 00:16:42.280 "data_offset": 2048, 00:16:42.280 "data_size": 63488 00:16:42.280 }, 00:16:42.280 { 00:16:42.280 "name": "BaseBdev3", 00:16:42.280 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:42.280 "is_configured": true, 00:16:42.280 "data_offset": 2048, 00:16:42.280 "data_size": 63488 00:16:42.280 }, 00:16:42.280 { 00:16:42.280 "name": "BaseBdev4", 00:16:42.280 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:42.280 "is_configured": true, 00:16:42.280 "data_offset": 2048, 00:16:42.280 "data_size": 63488 00:16:42.280 } 00:16:42.280 ] 00:16:42.280 }' 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.280 
13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.280 [2024-11-26 13:29:30.757890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.280 [2024-11-26 13:29:30.758010] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:42.280 [2024-11-26 13:29:30.758034] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:42.280 request: 00:16:42.280 { 00:16:42.280 "base_bdev": "BaseBdev1", 00:16:42.280 "raid_bdev": "raid_bdev1", 00:16:42.280 "method": "bdev_raid_add_base_bdev", 00:16:42.280 "req_id": 1 00:16:42.280 } 00:16:42.280 Got JSON-RPC error response 00:16:42.280 response: 00:16:42.280 { 00:16:42.280 "code": -22, 00:16:42.280 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:42.280 } 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.280 13:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.217 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.476 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.476 "name": "raid_bdev1", 00:16:43.476 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:43.476 "strip_size_kb": 64, 00:16:43.476 "state": "online", 00:16:43.476 "raid_level": "raid5f", 00:16:43.476 "superblock": true, 00:16:43.476 "num_base_bdevs": 4, 00:16:43.476 "num_base_bdevs_discovered": 3, 00:16:43.476 "num_base_bdevs_operational": 3, 00:16:43.476 "base_bdevs_list": [ 00:16:43.476 { 00:16:43.476 "name": null, 00:16:43.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.476 "is_configured": false, 00:16:43.476 "data_offset": 0, 00:16:43.476 "data_size": 63488 00:16:43.476 }, 00:16:43.476 { 00:16:43.476 "name": "BaseBdev2", 00:16:43.476 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:43.476 "is_configured": true, 00:16:43.476 "data_offset": 2048, 00:16:43.476 "data_size": 63488 00:16:43.476 }, 00:16:43.476 { 00:16:43.476 "name": "BaseBdev3", 00:16:43.476 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:43.476 "is_configured": true, 00:16:43.476 "data_offset": 2048, 00:16:43.476 "data_size": 63488 00:16:43.476 }, 00:16:43.476 { 00:16:43.476 "name": "BaseBdev4", 00:16:43.476 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:43.476 "is_configured": true, 00:16:43.476 "data_offset": 2048, 00:16:43.476 "data_size": 63488 00:16:43.476 } 00:16:43.476 ] 00:16:43.476 }' 00:16:43.476 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.476 13:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.735 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.994 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.994 "name": "raid_bdev1", 00:16:43.994 "uuid": "1a41b75a-6682-44ab-9149-1820ac660710", 00:16:43.994 "strip_size_kb": 64, 00:16:43.994 "state": "online", 00:16:43.994 "raid_level": "raid5f", 00:16:43.994 "superblock": true, 00:16:43.994 "num_base_bdevs": 4, 00:16:43.994 "num_base_bdevs_discovered": 3, 00:16:43.994 "num_base_bdevs_operational": 3, 00:16:43.994 "base_bdevs_list": [ 00:16:43.994 { 00:16:43.994 "name": null, 00:16:43.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.994 "is_configured": false, 00:16:43.994 "data_offset": 0, 00:16:43.995 "data_size": 63488 00:16:43.995 }, 00:16:43.995 { 00:16:43.995 "name": "BaseBdev2", 00:16:43.995 "uuid": "c866e1de-2646-5c27-accc-e77c0f664424", 00:16:43.995 "is_configured": true, 
00:16:43.995 "data_offset": 2048, 00:16:43.995 "data_size": 63488 00:16:43.995 }, 00:16:43.995 { 00:16:43.995 "name": "BaseBdev3", 00:16:43.995 "uuid": "92c94da2-b2b5-53f4-b866-cdb3605f9bb3", 00:16:43.995 "is_configured": true, 00:16:43.995 "data_offset": 2048, 00:16:43.995 "data_size": 63488 00:16:43.995 }, 00:16:43.995 { 00:16:43.995 "name": "BaseBdev4", 00:16:43.995 "uuid": "9393e637-0489-54ba-b97a-260eaa4984fb", 00:16:43.995 "is_configured": true, 00:16:43.995 "data_offset": 2048, 00:16:43.995 "data_size": 63488 00:16:43.995 } 00:16:43.995 ] 00:16:43.995 }' 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84765 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84765 ']' 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84765 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84765 00:16:43.995 killing process with pid 84765 00:16:43.995 Received shutdown signal, test time was about 60.000000 seconds 00:16:43.995 00:16:43.995 Latency(us) 00:16:43.995 [2024-11-26T13:29:32.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.995 [2024-11-26T13:29:32.565Z] 
=================================================================================================================== 00:16:43.995 [2024-11-26T13:29:32.565Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84765' 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84765 00:16:43.995 [2024-11-26 13:29:32.463827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.995 [2024-11-26 13:29:32.463918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.995 13:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84765 00:16:43.995 [2024-11-26 13:29:32.463991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.995 [2024-11-26 13:29:32.464010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:44.254 [2024-11-26 13:29:32.795662] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.191 13:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:45.191 00:16:45.191 real 0m27.761s 00:16:45.191 user 0m36.174s 00:16:45.191 sys 0m2.761s 00:16:45.191 13:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.191 ************************************ 00:16:45.191 13:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.191 END TEST raid5f_rebuild_test_sb 00:16:45.191 ************************************ 00:16:45.191 13:29:33 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:45.191 13:29:33 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:45.191 13:29:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:45.191 13:29:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.191 13:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.191 ************************************ 00:16:45.191 START TEST raid_state_function_test_sb_4k 00:16:45.191 ************************************ 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.191 13:29:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:45.191 Process raid pid: 85577 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85577 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85577' 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85577 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85577 ']' 00:16:45.191 13:29:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.191 13:29:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.450 [2024-11-26 13:29:33.808416] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:45.451 [2024-11-26 13:29:33.809553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.451 [2024-11-26 13:29:33.998562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.710 [2024-11-26 13:29:34.096796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.710 [2024-11-26 13:29:34.266497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.710 [2024-11-26 13:29:34.266811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.277 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 [2024-11-26 13:29:34.743732] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.278 [2024-11-26 13:29:34.743787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.278 [2024-11-26 13:29:34.743801] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.278 [2024-11-26 13:29:34.743815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.278 
13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.278 "name": "Existed_Raid", 00:16:46.278 "uuid": "eb52c674-ab54-47db-92c9-0f71500b5631", 00:16:46.278 "strip_size_kb": 0, 00:16:46.278 "state": "configuring", 00:16:46.278 "raid_level": "raid1", 00:16:46.278 "superblock": true, 00:16:46.278 "num_base_bdevs": 2, 00:16:46.278 "num_base_bdevs_discovered": 0, 00:16:46.278 "num_base_bdevs_operational": 2, 00:16:46.278 "base_bdevs_list": [ 00:16:46.278 { 00:16:46.278 "name": "BaseBdev1", 00:16:46.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.278 "is_configured": false, 00:16:46.278 "data_offset": 0, 00:16:46.278 "data_size": 0 00:16:46.278 }, 00:16:46.278 { 00:16:46.278 "name": "BaseBdev2", 00:16:46.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.278 "is_configured": false, 00:16:46.278 "data_offset": 0, 00:16:46.278 "data_size": 0 00:16:46.278 } 00:16:46.278 ] 00:16:46.278 }' 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.278 13:29:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.846 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:46.846 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.846 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.846 [2024-11-26 13:29:35.203765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.847 [2024-11-26 13:29:35.203795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.847 [2024-11-26 13:29:35.215771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.847 [2024-11-26 13:29:35.215936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.847 [2024-11-26 13:29:35.216046] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.847 [2024-11-26 13:29:35.216171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.847 13:29:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.847 [2024-11-26 13:29:35.258130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.847 BaseBdev1 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.847 [ 00:16:46.847 { 00:16:46.847 "name": "BaseBdev1", 00:16:46.847 "aliases": [ 00:16:46.847 
"53235d2d-11eb-4a47-9aab-66ba6e2d26dd" 00:16:46.847 ], 00:16:46.847 "product_name": "Malloc disk", 00:16:46.847 "block_size": 4096, 00:16:46.847 "num_blocks": 8192, 00:16:46.847 "uuid": "53235d2d-11eb-4a47-9aab-66ba6e2d26dd", 00:16:46.847 "assigned_rate_limits": { 00:16:46.847 "rw_ios_per_sec": 0, 00:16:46.847 "rw_mbytes_per_sec": 0, 00:16:46.847 "r_mbytes_per_sec": 0, 00:16:46.847 "w_mbytes_per_sec": 0 00:16:46.847 }, 00:16:46.847 "claimed": true, 00:16:46.847 "claim_type": "exclusive_write", 00:16:46.847 "zoned": false, 00:16:46.847 "supported_io_types": { 00:16:46.847 "read": true, 00:16:46.847 "write": true, 00:16:46.847 "unmap": true, 00:16:46.847 "flush": true, 00:16:46.847 "reset": true, 00:16:46.847 "nvme_admin": false, 00:16:46.847 "nvme_io": false, 00:16:46.847 "nvme_io_md": false, 00:16:46.847 "write_zeroes": true, 00:16:46.847 "zcopy": true, 00:16:46.847 "get_zone_info": false, 00:16:46.847 "zone_management": false, 00:16:46.847 "zone_append": false, 00:16:46.847 "compare": false, 00:16:46.847 "compare_and_write": false, 00:16:46.847 "abort": true, 00:16:46.847 "seek_hole": false, 00:16:46.847 "seek_data": false, 00:16:46.847 "copy": true, 00:16:46.847 "nvme_iov_md": false 00:16:46.847 }, 00:16:46.847 "memory_domains": [ 00:16:46.847 { 00:16:46.847 "dma_device_id": "system", 00:16:46.847 "dma_device_type": 1 00:16:46.847 }, 00:16:46.847 { 00:16:46.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.847 "dma_device_type": 2 00:16:46.847 } 00:16:46.847 ], 00:16:46.847 "driver_specific": {} 00:16:46.847 } 00:16:46.847 ] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.847 "name": "Existed_Raid", 00:16:46.847 "uuid": "b81ce052-dc71-4dc4-bcba-64c3a9309f8a", 00:16:46.847 "strip_size_kb": 0, 00:16:46.847 "state": "configuring", 00:16:46.847 "raid_level": "raid1", 00:16:46.847 "superblock": true, 00:16:46.847 "num_base_bdevs": 2, 00:16:46.847 
"num_base_bdevs_discovered": 1, 00:16:46.847 "num_base_bdevs_operational": 2, 00:16:46.847 "base_bdevs_list": [ 00:16:46.847 { 00:16:46.847 "name": "BaseBdev1", 00:16:46.847 "uuid": "53235d2d-11eb-4a47-9aab-66ba6e2d26dd", 00:16:46.847 "is_configured": true, 00:16:46.847 "data_offset": 256, 00:16:46.847 "data_size": 7936 00:16:46.847 }, 00:16:46.847 { 00:16:46.847 "name": "BaseBdev2", 00:16:46.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.847 "is_configured": false, 00:16:46.847 "data_offset": 0, 00:16:46.847 "data_size": 0 00:16:46.847 } 00:16:46.847 ] 00:16:46.847 }' 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.847 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.416 [2024-11-26 13:29:35.798269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.416 [2024-11-26 13:29:35.798441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.416 [2024-11-26 13:29:35.810328] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.416 [2024-11-26 13:29:35.812487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.416 [2024-11-26 13:29:35.812665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.416 "name": "Existed_Raid", 00:16:47.416 "uuid": "e4c5eb4c-cdbd-4924-8ed8-73f77ec545d9", 00:16:47.416 "strip_size_kb": 0, 00:16:47.416 "state": "configuring", 00:16:47.416 "raid_level": "raid1", 00:16:47.416 "superblock": true, 00:16:47.416 "num_base_bdevs": 2, 00:16:47.416 "num_base_bdevs_discovered": 1, 00:16:47.416 "num_base_bdevs_operational": 2, 00:16:47.416 "base_bdevs_list": [ 00:16:47.416 { 00:16:47.416 "name": "BaseBdev1", 00:16:47.416 "uuid": "53235d2d-11eb-4a47-9aab-66ba6e2d26dd", 00:16:47.416 "is_configured": true, 00:16:47.416 "data_offset": 256, 00:16:47.416 "data_size": 7936 00:16:47.416 }, 00:16:47.416 { 00:16:47.416 "name": "BaseBdev2", 00:16:47.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.416 "is_configured": false, 00:16:47.416 "data_offset": 0, 00:16:47.416 "data_size": 0 00:16:47.416 } 00:16:47.416 ] 00:16:47.416 }' 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.416 13:29:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.984 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.985 13:29:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.985 [2024-11-26 13:29:36.370700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.985 [2024-11-26 13:29:36.370939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.985 [2024-11-26 13:29:36.370957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:47.985 BaseBdev2 00:16:47.985 [2024-11-26 13:29:36.371221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:47.985 [2024-11-26 13:29:36.371409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.985 [2024-11-26 13:29:36.371427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:47.985 [2024-11-26 13:29:36.371568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.985 13:29:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.985 [ 00:16:47.985 { 00:16:47.985 "name": "BaseBdev2", 00:16:47.985 "aliases": [ 00:16:47.985 "2a49011f-2376-44c6-b391-d7e1fd4d5dee" 00:16:47.985 ], 00:16:47.985 "product_name": "Malloc disk", 00:16:47.985 "block_size": 4096, 00:16:47.985 "num_blocks": 8192, 00:16:47.985 "uuid": "2a49011f-2376-44c6-b391-d7e1fd4d5dee", 00:16:47.985 "assigned_rate_limits": { 00:16:47.985 "rw_ios_per_sec": 0, 00:16:47.985 "rw_mbytes_per_sec": 0, 00:16:47.985 "r_mbytes_per_sec": 0, 00:16:47.985 "w_mbytes_per_sec": 0 00:16:47.985 }, 00:16:47.985 "claimed": true, 00:16:47.985 "claim_type": "exclusive_write", 00:16:47.985 "zoned": false, 00:16:47.985 "supported_io_types": { 00:16:47.985 "read": true, 00:16:47.985 "write": true, 00:16:47.985 "unmap": true, 00:16:47.985 "flush": true, 00:16:47.985 "reset": true, 00:16:47.985 "nvme_admin": false, 00:16:47.985 "nvme_io": false, 00:16:47.985 "nvme_io_md": false, 00:16:47.985 "write_zeroes": true, 00:16:47.985 "zcopy": true, 00:16:47.985 "get_zone_info": false, 00:16:47.985 "zone_management": false, 00:16:47.985 "zone_append": false, 00:16:47.985 "compare": false, 00:16:47.985 "compare_and_write": false, 00:16:47.985 "abort": true, 00:16:47.985 "seek_hole": false, 00:16:47.985 "seek_data": false, 00:16:47.985 "copy": true, 00:16:47.985 "nvme_iov_md": false 
00:16:47.985 }, 00:16:47.985 "memory_domains": [ 00:16:47.985 { 00:16:47.985 "dma_device_id": "system", 00:16:47.985 "dma_device_type": 1 00:16:47.985 }, 00:16:47.985 { 00:16:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.985 "dma_device_type": 2 00:16:47.985 } 00:16:47.985 ], 00:16:47.985 "driver_specific": {} 00:16:47.985 } 00:16:47.985 ] 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.985 "name": "Existed_Raid", 00:16:47.985 "uuid": "e4c5eb4c-cdbd-4924-8ed8-73f77ec545d9", 00:16:47.985 "strip_size_kb": 0, 00:16:47.985 "state": "online", 00:16:47.985 "raid_level": "raid1", 00:16:47.985 "superblock": true, 00:16:47.985 "num_base_bdevs": 2, 00:16:47.985 "num_base_bdevs_discovered": 2, 00:16:47.985 "num_base_bdevs_operational": 2, 00:16:47.985 "base_bdevs_list": [ 00:16:47.985 { 00:16:47.985 "name": "BaseBdev1", 00:16:47.985 "uuid": "53235d2d-11eb-4a47-9aab-66ba6e2d26dd", 00:16:47.985 "is_configured": true, 00:16:47.985 "data_offset": 256, 00:16:47.985 "data_size": 7936 00:16:47.985 }, 00:16:47.985 { 00:16:47.985 "name": "BaseBdev2", 00:16:47.985 "uuid": "2a49011f-2376-44c6-b391-d7e1fd4d5dee", 00:16:47.985 "is_configured": true, 00:16:47.985 "data_offset": 256, 00:16:47.985 "data_size": 7936 00:16:47.985 } 00:16:47.985 ] 00:16:47.985 }' 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.985 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:48.554 13:29:36 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.554 [2024-11-26 13:29:36.943088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.554 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.554 "name": "Existed_Raid", 00:16:48.554 "aliases": [ 00:16:48.554 "e4c5eb4c-cdbd-4924-8ed8-73f77ec545d9" 00:16:48.554 ], 00:16:48.554 "product_name": "Raid Volume", 00:16:48.554 "block_size": 4096, 00:16:48.554 "num_blocks": 7936, 00:16:48.554 "uuid": "e4c5eb4c-cdbd-4924-8ed8-73f77ec545d9", 00:16:48.554 "assigned_rate_limits": { 00:16:48.554 "rw_ios_per_sec": 0, 00:16:48.554 "rw_mbytes_per_sec": 0, 00:16:48.554 "r_mbytes_per_sec": 0, 00:16:48.554 "w_mbytes_per_sec": 0 00:16:48.554 }, 00:16:48.554 "claimed": false, 00:16:48.554 "zoned": false, 00:16:48.554 "supported_io_types": { 00:16:48.554 "read": true, 
00:16:48.554 "write": true, 00:16:48.554 "unmap": false, 00:16:48.554 "flush": false, 00:16:48.554 "reset": true, 00:16:48.554 "nvme_admin": false, 00:16:48.554 "nvme_io": false, 00:16:48.554 "nvme_io_md": false, 00:16:48.554 "write_zeroes": true, 00:16:48.554 "zcopy": false, 00:16:48.554 "get_zone_info": false, 00:16:48.554 "zone_management": false, 00:16:48.554 "zone_append": false, 00:16:48.554 "compare": false, 00:16:48.554 "compare_and_write": false, 00:16:48.554 "abort": false, 00:16:48.554 "seek_hole": false, 00:16:48.554 "seek_data": false, 00:16:48.554 "copy": false, 00:16:48.554 "nvme_iov_md": false 00:16:48.554 }, 00:16:48.554 "memory_domains": [ 00:16:48.554 { 00:16:48.554 "dma_device_id": "system", 00:16:48.554 "dma_device_type": 1 00:16:48.554 }, 00:16:48.554 { 00:16:48.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.554 "dma_device_type": 2 00:16:48.554 }, 00:16:48.554 { 00:16:48.554 "dma_device_id": "system", 00:16:48.554 "dma_device_type": 1 00:16:48.554 }, 00:16:48.554 { 00:16:48.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.554 "dma_device_type": 2 00:16:48.554 } 00:16:48.554 ], 00:16:48.554 "driver_specific": { 00:16:48.554 "raid": { 00:16:48.554 "uuid": "e4c5eb4c-cdbd-4924-8ed8-73f77ec545d9", 00:16:48.554 "strip_size_kb": 0, 00:16:48.554 "state": "online", 00:16:48.554 "raid_level": "raid1", 00:16:48.554 "superblock": true, 00:16:48.554 "num_base_bdevs": 2, 00:16:48.555 "num_base_bdevs_discovered": 2, 00:16:48.555 "num_base_bdevs_operational": 2, 00:16:48.555 "base_bdevs_list": [ 00:16:48.555 { 00:16:48.555 "name": "BaseBdev1", 00:16:48.555 "uuid": "53235d2d-11eb-4a47-9aab-66ba6e2d26dd", 00:16:48.555 "is_configured": true, 00:16:48.555 "data_offset": 256, 00:16:48.555 "data_size": 7936 00:16:48.555 }, 00:16:48.555 { 00:16:48.555 "name": "BaseBdev2", 00:16:48.555 "uuid": "2a49011f-2376-44c6-b391-d7e1fd4d5dee", 00:16:48.555 "is_configured": true, 00:16:48.555 "data_offset": 256, 00:16:48.555 "data_size": 7936 00:16:48.555 } 
00:16:48.555 ] 00:16:48.555 } 00:16:48.555 } 00:16:48.555 }' 00:16:48.555 13:29:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:48.555 BaseBdev2' 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.555 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.814 [2024-11-26 13:29:37.210931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.814 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.815 "name": "Existed_Raid", 00:16:48.815 "uuid": "e4c5eb4c-cdbd-4924-8ed8-73f77ec545d9", 00:16:48.815 "strip_size_kb": 0, 00:16:48.815 "state": "online", 00:16:48.815 "raid_level": "raid1", 00:16:48.815 "superblock": true, 00:16:48.815 "num_base_bdevs": 2, 00:16:48.815 
"num_base_bdevs_discovered": 1, 00:16:48.815 "num_base_bdevs_operational": 1, 00:16:48.815 "base_bdevs_list": [ 00:16:48.815 { 00:16:48.815 "name": null, 00:16:48.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.815 "is_configured": false, 00:16:48.815 "data_offset": 0, 00:16:48.815 "data_size": 7936 00:16:48.815 }, 00:16:48.815 { 00:16:48.815 "name": "BaseBdev2", 00:16:48.815 "uuid": "2a49011f-2376-44c6-b391-d7e1fd4d5dee", 00:16:48.815 "is_configured": true, 00:16:48.815 "data_offset": 256, 00:16:48.815 "data_size": 7936 00:16:48.815 } 00:16:48.815 ] 00:16:48.815 }' 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.815 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.380 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:49.380 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.380 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.380 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.380 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:49.381 13:29:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.381 [2024-11-26 13:29:37.847527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.381 [2024-11-26 13:29:37.847631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.381 [2024-11-26 13:29:37.911901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.381 [2024-11-26 13:29:37.911952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.381 [2024-11-26 13:29:37.911969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.381 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85577 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85577 ']' 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85577 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.639 13:29:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85577 00:16:49.639 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.639 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.639 killing process with pid 85577 00:16:49.639 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85577' 00:16:49.639 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85577 00:16:49.639 [2024-11-26 13:29:38.002519] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.639 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85577 00:16:49.639 [2024-11-26 13:29:38.014316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.590 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:50.590 00:16:50.590 real 0m5.161s 00:16:50.590 user 0m7.901s 00:16:50.590 sys 0m0.786s 00:16:50.590 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:16:50.590 13:29:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.590 ************************************ 00:16:50.590 END TEST raid_state_function_test_sb_4k 00:16:50.590 ************************************ 00:16:50.590 13:29:38 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:50.590 13:29:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:50.590 13:29:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.590 13:29:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.590 ************************************ 00:16:50.590 START TEST raid_superblock_test_4k 00:16:50.590 ************************************ 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85829 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85829 00:16:50.590 13:29:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:50.591 13:29:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85829 ']' 00:16:50.591 13:29:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.591 13:29:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.591 13:29:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.591 13:29:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.591 13:29:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.591 [2024-11-26 13:29:39.017059] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:16:50.591 [2024-11-26 13:29:39.017292] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85829 ] 00:16:50.864 [2024-11-26 13:29:39.197947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.864 [2024-11-26 13:29:39.294641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.122 [2024-11-26 13:29:39.460490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.122 [2024-11-26 13:29:39.460536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.380 malloc1 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.380 [2024-11-26 13:29:39.933212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.380 [2024-11-26 13:29:39.933290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.380 [2024-11-26 13:29:39.933321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:51.380 [2024-11-26 13:29:39.933333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.380 [2024-11-26 13:29:39.935634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.380 [2024-11-26 13:29:39.935672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.380 pt1 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.380 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.639 malloc2 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.639 [2024-11-26 13:29:39.978869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.639 [2024-11-26 13:29:39.978923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.639 [2024-11-26 13:29:39.978949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:51.639 [2024-11-26 13:29:39.978961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.639 [2024-11-26 13:29:39.981320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.639 [2024-11-26 
13:29:39.981358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.639 pt2 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.639 [2024-11-26 13:29:39.986955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.639 [2024-11-26 13:29:39.989033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.639 [2024-11-26 13:29:39.989267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:51.639 [2024-11-26 13:29:39.989296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:51.639 [2024-11-26 13:29:39.989564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:51.639 [2024-11-26 13:29:39.989756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:51.639 [2024-11-26 13:29:39.989801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:51.639 [2024-11-26 13:29:39.989961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.639 13:29:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.639 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.639 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.639 "name": "raid_bdev1", 00:16:51.639 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:51.639 "strip_size_kb": 0, 00:16:51.639 "state": "online", 00:16:51.639 "raid_level": "raid1", 00:16:51.639 "superblock": true, 00:16:51.639 "num_base_bdevs": 2, 00:16:51.639 
"num_base_bdevs_discovered": 2, 00:16:51.639 "num_base_bdevs_operational": 2, 00:16:51.639 "base_bdevs_list": [ 00:16:51.639 { 00:16:51.639 "name": "pt1", 00:16:51.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.639 "is_configured": true, 00:16:51.639 "data_offset": 256, 00:16:51.639 "data_size": 7936 00:16:51.639 }, 00:16:51.639 { 00:16:51.639 "name": "pt2", 00:16:51.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.639 "is_configured": true, 00:16:51.639 "data_offset": 256, 00:16:51.639 "data_size": 7936 00:16:51.639 } 00:16:51.639 ] 00:16:51.639 }' 00:16:51.639 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.639 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.203 [2024-11-26 13:29:40.475343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:52.203 "name": "raid_bdev1", 00:16:52.203 "aliases": [ 00:16:52.203 "550dafc3-3b41-4e29-8280-4c94db59520c" 00:16:52.203 ], 00:16:52.203 "product_name": "Raid Volume", 00:16:52.203 "block_size": 4096, 00:16:52.203 "num_blocks": 7936, 00:16:52.203 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:52.203 "assigned_rate_limits": { 00:16:52.203 "rw_ios_per_sec": 0, 00:16:52.203 "rw_mbytes_per_sec": 0, 00:16:52.203 "r_mbytes_per_sec": 0, 00:16:52.203 "w_mbytes_per_sec": 0 00:16:52.203 }, 00:16:52.203 "claimed": false, 00:16:52.203 "zoned": false, 00:16:52.203 "supported_io_types": { 00:16:52.203 "read": true, 00:16:52.203 "write": true, 00:16:52.203 "unmap": false, 00:16:52.203 "flush": false, 00:16:52.203 "reset": true, 00:16:52.203 "nvme_admin": false, 00:16:52.203 "nvme_io": false, 00:16:52.203 "nvme_io_md": false, 00:16:52.203 "write_zeroes": true, 00:16:52.203 "zcopy": false, 00:16:52.203 "get_zone_info": false, 00:16:52.203 "zone_management": false, 00:16:52.203 "zone_append": false, 00:16:52.203 "compare": false, 00:16:52.203 "compare_and_write": false, 00:16:52.203 "abort": false, 00:16:52.203 "seek_hole": false, 00:16:52.203 "seek_data": false, 00:16:52.203 "copy": false, 00:16:52.203 "nvme_iov_md": false 00:16:52.203 }, 00:16:52.203 "memory_domains": [ 00:16:52.203 { 00:16:52.203 "dma_device_id": "system", 00:16:52.203 "dma_device_type": 1 00:16:52.203 }, 00:16:52.203 { 00:16:52.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.203 "dma_device_type": 2 00:16:52.203 }, 00:16:52.203 { 00:16:52.203 "dma_device_id": "system", 00:16:52.203 "dma_device_type": 1 00:16:52.203 }, 00:16:52.203 { 00:16:52.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.203 "dma_device_type": 2 00:16:52.203 } 00:16:52.203 ], 
00:16:52.203 "driver_specific": { 00:16:52.203 "raid": { 00:16:52.203 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:52.203 "strip_size_kb": 0, 00:16:52.203 "state": "online", 00:16:52.203 "raid_level": "raid1", 00:16:52.203 "superblock": true, 00:16:52.203 "num_base_bdevs": 2, 00:16:52.203 "num_base_bdevs_discovered": 2, 00:16:52.203 "num_base_bdevs_operational": 2, 00:16:52.203 "base_bdevs_list": [ 00:16:52.203 { 00:16:52.203 "name": "pt1", 00:16:52.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.203 "is_configured": true, 00:16:52.203 "data_offset": 256, 00:16:52.203 "data_size": 7936 00:16:52.203 }, 00:16:52.203 { 00:16:52.203 "name": "pt2", 00:16:52.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.203 "is_configured": true, 00:16:52.203 "data_offset": 256, 00:16:52.203 "data_size": 7936 00:16:52.203 } 00:16:52.203 ] 00:16:52.203 } 00:16:52.203 } 00:16:52.203 }' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:52.203 pt2' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.203 13:29:40 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.203 [2024-11-26 13:29:40.747311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.203 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=550dafc3-3b41-4e29-8280-4c94db59520c 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 550dafc3-3b41-4e29-8280-4c94db59520c ']' 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.461 [2024-11-26 13:29:40.791034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.461 [2024-11-26 13:29:40.791059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.461 [2024-11-26 13:29:40.791124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.461 [2024-11-26 13:29:40.791179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.461 [2024-11-26 13:29:40.791198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.461 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.461 [2024-11-26 13:29:40.931085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:52.461 [2024-11-26 13:29:40.933111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:52.461 [2024-11-26 13:29:40.933182] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:52.461 [2024-11-26 13:29:40.933255] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:52.461 [2024-11-26 13:29:40.933279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.461 [2024-11-26 13:29:40.933292] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:52.461 request: 00:16:52.461 { 00:16:52.461 "name": "raid_bdev1", 00:16:52.461 "raid_level": "raid1", 00:16:52.461 "base_bdevs": [ 00:16:52.461 "malloc1", 00:16:52.461 "malloc2" 00:16:52.461 ], 00:16:52.461 "superblock": false, 00:16:52.461 "method": "bdev_raid_create", 00:16:52.461 "req_id": 1 00:16:52.461 } 00:16:52.461 Got JSON-RPC error response 00:16:52.462 response: 00:16:52.462 { 00:16:52.462 "code": -17, 00:16:52.462 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:52.462 } 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.462 13:29:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.462 [2024-11-26 13:29:40.999086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:52.462 [2024-11-26 13:29:40.999137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.462 [2024-11-26 13:29:40.999156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:52.462 [2024-11-26 13:29:40.999170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.462 [2024-11-26 13:29:41.001484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.462 [2024-11-26 13:29:41.001527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:52.462 [2024-11-26 13:29:41.001598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:52.462 [2024-11-26 13:29:41.001662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:52.462 pt1 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.462 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.719 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.719 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.719 "name": "raid_bdev1", 00:16:52.719 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:52.719 "strip_size_kb": 0, 00:16:52.719 "state": "configuring", 00:16:52.719 "raid_level": "raid1", 00:16:52.719 "superblock": true, 00:16:52.719 "num_base_bdevs": 2, 00:16:52.719 "num_base_bdevs_discovered": 1, 00:16:52.719 "num_base_bdevs_operational": 2, 00:16:52.719 "base_bdevs_list": [ 00:16:52.719 { 00:16:52.719 "name": "pt1", 00:16:52.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.719 "is_configured": true, 00:16:52.719 "data_offset": 256, 00:16:52.719 "data_size": 7936 00:16:52.719 }, 00:16:52.719 { 00:16:52.719 "name": null, 00:16:52.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.719 "is_configured": false, 00:16:52.719 "data_offset": 256, 00:16:52.719 "data_size": 7936 00:16:52.719 } 
00:16:52.719 ] 00:16:52.719 }' 00:16:52.719 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.719 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.977 [2024-11-26 13:29:41.467180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:52.977 [2024-11-26 13:29:41.467244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.977 [2024-11-26 13:29:41.467266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:52.977 [2024-11-26 13:29:41.467279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.977 [2024-11-26 13:29:41.467707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.977 [2024-11-26 13:29:41.467741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:52.977 [2024-11-26 13:29:41.467803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:52.977 [2024-11-26 13:29:41.467832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.977 [2024-11-26 13:29:41.467939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:52.977 [2024-11-26 13:29:41.467958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:52.977 [2024-11-26 13:29:41.468190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:52.977 [2024-11-26 13:29:41.468407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:52.977 [2024-11-26 13:29:41.468438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:52.977 [2024-11-26 13:29:41.468579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.977 pt2 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.977 "name": "raid_bdev1", 00:16:52.977 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:52.977 "strip_size_kb": 0, 00:16:52.977 "state": "online", 00:16:52.977 "raid_level": "raid1", 00:16:52.977 "superblock": true, 00:16:52.977 "num_base_bdevs": 2, 00:16:52.977 "num_base_bdevs_discovered": 2, 00:16:52.977 "num_base_bdevs_operational": 2, 00:16:52.977 "base_bdevs_list": [ 00:16:52.977 { 00:16:52.977 "name": "pt1", 00:16:52.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.977 "is_configured": true, 00:16:52.977 "data_offset": 256, 00:16:52.977 "data_size": 7936 00:16:52.977 }, 00:16:52.977 { 00:16:52.977 "name": "pt2", 00:16:52.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.977 "is_configured": true, 00:16:52.977 "data_offset": 256, 00:16:52.977 "data_size": 7936 00:16:52.977 } 00:16:52.977 ] 00:16:52.977 }' 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.977 13:29:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.544 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:53.544 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:53.544 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:53.544 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:53.544 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:53.544 13:29:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.544 [2024-11-26 13:29:42.007546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:53.544 "name": "raid_bdev1", 00:16:53.544 "aliases": [ 00:16:53.544 "550dafc3-3b41-4e29-8280-4c94db59520c" 00:16:53.544 ], 00:16:53.544 "product_name": "Raid Volume", 00:16:53.544 "block_size": 4096, 00:16:53.544 "num_blocks": 7936, 00:16:53.544 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:53.544 "assigned_rate_limits": { 00:16:53.544 "rw_ios_per_sec": 0, 00:16:53.544 "rw_mbytes_per_sec": 0, 00:16:53.544 "r_mbytes_per_sec": 0, 00:16:53.544 "w_mbytes_per_sec": 0 00:16:53.544 }, 00:16:53.544 "claimed": false, 00:16:53.544 "zoned": false, 00:16:53.544 "supported_io_types": { 00:16:53.544 "read": true, 00:16:53.544 "write": true, 00:16:53.544 "unmap": false, 
00:16:53.544 "flush": false, 00:16:53.544 "reset": true, 00:16:53.544 "nvme_admin": false, 00:16:53.544 "nvme_io": false, 00:16:53.544 "nvme_io_md": false, 00:16:53.544 "write_zeroes": true, 00:16:53.544 "zcopy": false, 00:16:53.544 "get_zone_info": false, 00:16:53.544 "zone_management": false, 00:16:53.544 "zone_append": false, 00:16:53.544 "compare": false, 00:16:53.544 "compare_and_write": false, 00:16:53.544 "abort": false, 00:16:53.544 "seek_hole": false, 00:16:53.544 "seek_data": false, 00:16:53.544 "copy": false, 00:16:53.544 "nvme_iov_md": false 00:16:53.544 }, 00:16:53.544 "memory_domains": [ 00:16:53.544 { 00:16:53.544 "dma_device_id": "system", 00:16:53.544 "dma_device_type": 1 00:16:53.544 }, 00:16:53.544 { 00:16:53.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.544 "dma_device_type": 2 00:16:53.544 }, 00:16:53.544 { 00:16:53.544 "dma_device_id": "system", 00:16:53.544 "dma_device_type": 1 00:16:53.544 }, 00:16:53.544 { 00:16:53.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.544 "dma_device_type": 2 00:16:53.544 } 00:16:53.544 ], 00:16:53.544 "driver_specific": { 00:16:53.544 "raid": { 00:16:53.544 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:53.544 "strip_size_kb": 0, 00:16:53.544 "state": "online", 00:16:53.544 "raid_level": "raid1", 00:16:53.544 "superblock": true, 00:16:53.544 "num_base_bdevs": 2, 00:16:53.544 "num_base_bdevs_discovered": 2, 00:16:53.544 "num_base_bdevs_operational": 2, 00:16:53.544 "base_bdevs_list": [ 00:16:53.544 { 00:16:53.544 "name": "pt1", 00:16:53.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.544 "is_configured": true, 00:16:53.544 "data_offset": 256, 00:16:53.544 "data_size": 7936 00:16:53.544 }, 00:16:53.544 { 00:16:53.544 "name": "pt2", 00:16:53.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.544 "is_configured": true, 00:16:53.544 "data_offset": 256, 00:16:53.544 "data_size": 7936 00:16:53.544 } 00:16:53.544 ] 00:16:53.544 } 00:16:53.544 } 00:16:53.544 }' 00:16:53.544 
13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:53.544 pt2' 00:16:53.544 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.803 
13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:53.803 [2024-11-26 13:29:42.275613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 550dafc3-3b41-4e29-8280-4c94db59520c '!=' 550dafc3-3b41-4e29-8280-4c94db59520c ']' 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.803 [2024-11-26 13:29:42.323411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:53.803 
13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.803 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.062 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.062 "name": "raid_bdev1", 00:16:54.062 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 
00:16:54.062 "strip_size_kb": 0, 00:16:54.062 "state": "online", 00:16:54.062 "raid_level": "raid1", 00:16:54.062 "superblock": true, 00:16:54.062 "num_base_bdevs": 2, 00:16:54.062 "num_base_bdevs_discovered": 1, 00:16:54.062 "num_base_bdevs_operational": 1, 00:16:54.062 "base_bdevs_list": [ 00:16:54.062 { 00:16:54.062 "name": null, 00:16:54.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.062 "is_configured": false, 00:16:54.062 "data_offset": 0, 00:16:54.062 "data_size": 7936 00:16:54.062 }, 00:16:54.062 { 00:16:54.062 "name": "pt2", 00:16:54.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.062 "is_configured": true, 00:16:54.062 "data_offset": 256, 00:16:54.062 "data_size": 7936 00:16:54.062 } 00:16:54.062 ] 00:16:54.062 }' 00:16:54.062 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.062 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.320 [2024-11-26 13:29:42.839517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.320 [2024-11-26 13:29:42.839702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.320 [2024-11-26 13:29:42.839774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.320 [2024-11-26 13:29:42.839821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.320 [2024-11-26 13:29:42.839838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:54.320 13:29:42 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.320 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:54.579 13:29:42 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.579 [2024-11-26 13:29:42.915527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.579 [2024-11-26 13:29:42.915722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.579 [2024-11-26 13:29:42.915751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:54.579 [2024-11-26 13:29:42.915766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.579 [2024-11-26 13:29:42.918090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.579 [2024-11-26 13:29:42.918133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.579 [2024-11-26 13:29:42.918203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:54.579 [2024-11-26 13:29:42.918277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.579 [2024-11-26 13:29:42.918378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:54.579 [2024-11-26 13:29:42.918397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:54.579 [2024-11-26 13:29:42.918646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:54.579 [2024-11-26 13:29:42.918806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:54.579 [2024-11-26 13:29:42.918823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:16:54.579 [2024-11-26 13:29:42.918990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.579 pt2 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.579 "name": "raid_bdev1", 00:16:54.579 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:54.579 "strip_size_kb": 0, 00:16:54.579 "state": "online", 00:16:54.579 "raid_level": "raid1", 00:16:54.579 "superblock": true, 00:16:54.579 "num_base_bdevs": 2, 00:16:54.579 "num_base_bdevs_discovered": 1, 00:16:54.579 "num_base_bdevs_operational": 1, 00:16:54.579 "base_bdevs_list": [ 00:16:54.579 { 00:16:54.579 "name": null, 00:16:54.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.579 "is_configured": false, 00:16:54.579 "data_offset": 256, 00:16:54.579 "data_size": 7936 00:16:54.579 }, 00:16:54.579 { 00:16:54.579 "name": "pt2", 00:16:54.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.579 "is_configured": true, 00:16:54.579 "data_offset": 256, 00:16:54.579 "data_size": 7936 00:16:54.579 } 00:16:54.579 ] 00:16:54.579 }' 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.579 13:29:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.147 [2024-11-26 13:29:43.443613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.147 [2024-11-26 13:29:43.443781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.147 [2024-11-26 13:29:43.443852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.147 [2024-11-26 13:29:43.443906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.147 [2024-11-26 13:29:43.443920] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.147 [2024-11-26 13:29:43.503646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.147 [2024-11-26 13:29:43.503694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.147 [2024-11-26 13:29:43.503717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:55.147 [2024-11-26 13:29:43.503728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.147 [2024-11-26 13:29:43.505989] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.147 [2024-11-26 13:29:43.506160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.147 [2024-11-26 13:29:43.506271] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:55.147 [2024-11-26 13:29:43.506325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.147 [2024-11-26 13:29:43.506471] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:55.147 [2024-11-26 13:29:43.506502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.147 [2024-11-26 13:29:43.506521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:55.147 [2024-11-26 13:29:43.506586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.147 [2024-11-26 13:29:43.506692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:55.147 [2024-11-26 13:29:43.506705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:55.147 [2024-11-26 13:29:43.506961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:55.147 [2024-11-26 13:29:43.507111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:55.147 [2024-11-26 13:29:43.507128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:55.147 [2024-11-26 13:29:43.507297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.147 pt1 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.147 "name": "raid_bdev1", 00:16:55.147 "uuid": "550dafc3-3b41-4e29-8280-4c94db59520c", 00:16:55.147 "strip_size_kb": 0, 00:16:55.147 "state": "online", 00:16:55.147 "raid_level": "raid1", 
00:16:55.147 "superblock": true, 00:16:55.147 "num_base_bdevs": 2, 00:16:55.147 "num_base_bdevs_discovered": 1, 00:16:55.147 "num_base_bdevs_operational": 1, 00:16:55.147 "base_bdevs_list": [ 00:16:55.147 { 00:16:55.147 "name": null, 00:16:55.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.147 "is_configured": false, 00:16:55.147 "data_offset": 256, 00:16:55.147 "data_size": 7936 00:16:55.147 }, 00:16:55.147 { 00:16:55.147 "name": "pt2", 00:16:55.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.147 "is_configured": true, 00:16:55.147 "data_offset": 256, 00:16:55.147 "data_size": 7936 00:16:55.147 } 00:16:55.147 ] 00:16:55.147 }' 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.147 13:29:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:55.715 
[2024-11-26 13:29:44.087973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 550dafc3-3b41-4e29-8280-4c94db59520c '!=' 550dafc3-3b41-4e29-8280-4c94db59520c ']' 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85829 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85829 ']' 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85829 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85829 00:16:55.715 killing process with pid 85829 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85829' 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85829 00:16:55.715 [2024-11-26 13:29:44.167413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.715 [2024-11-26 13:29:44.167475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.715 [2024-11-26 13:29:44.167517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.715 [2024-11-26 13:29:44.167536] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:55.715 13:29:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85829 00:16:55.973 [2024-11-26 13:29:44.307179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.910 13:29:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:56.910 00:16:56.910 real 0m6.242s 00:16:56.910 user 0m10.019s 00:16:56.910 sys 0m0.965s 00:16:56.910 13:29:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.910 ************************************ 00:16:56.910 END TEST raid_superblock_test_4k 00:16:56.910 ************************************ 00:16:56.910 13:29:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.910 13:29:45 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:56.910 13:29:45 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:56.910 13:29:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:56.910 13:29:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.910 13:29:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.910 ************************************ 00:16:56.910 START TEST raid_rebuild_test_sb_4k 00:16:56.910 ************************************ 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:56.910 13:29:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86152 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86152 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86152 ']' 00:16:56.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.910 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.910 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:56.910 Zero copy mechanism will not be used. 00:16:56.910 [2024-11-26 13:29:45.338000] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:16:56.910 [2024-11-26 13:29:45.338201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86152 ] 00:16:57.169 [2024-11-26 13:29:45.523107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.169 [2024-11-26 13:29:45.619092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.428 [2024-11-26 13:29:45.786878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.428 [2024-11-26 13:29:45.786911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 BaseBdev1_malloc 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 [2024-11-26 13:29:46.310306] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:57.997 [2024-11-26 13:29:46.310619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.997 [2024-11-26 13:29:46.310656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.997 [2024-11-26 13:29:46.310673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.997 [2024-11-26 13:29:46.312995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.997 [2024-11-26 13:29:46.313041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:57.997 BaseBdev1 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 BaseBdev2_malloc 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 [2024-11-26 13:29:46.355819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:57.997 [2024-11-26 13:29:46.355882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:57.997 [2024-11-26 13:29:46.355903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.997 [2024-11-26 13:29:46.355919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.997 [2024-11-26 13:29:46.358144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.997 [2024-11-26 13:29:46.358189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:57.997 BaseBdev2 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 spare_malloc 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 spare_delay 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 
[2024-11-26 13:29:46.417817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:57.997 [2024-11-26 13:29:46.417880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.997 [2024-11-26 13:29:46.417905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:57.997 [2024-11-26 13:29:46.417920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.997 [2024-11-26 13:29:46.420247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.997 [2024-11-26 13:29:46.420291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:57.997 spare 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 [2024-11-26 13:29:46.425884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.997 [2024-11-26 13:29:46.427845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.997 [2024-11-26 13:29:46.428041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.997 [2024-11-26 13:29:46.428064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:57.997 [2024-11-26 13:29:46.428576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:57.997 [2024-11-26 13:29:46.428931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.997 [2024-11-26 
13:29:46.429081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.997 [2024-11-26 13:29:46.429402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.997 "name": "raid_bdev1", 00:16:57.997 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:16:57.997 "strip_size_kb": 0, 00:16:57.997 "state": "online", 00:16:57.997 "raid_level": "raid1", 00:16:57.997 "superblock": true, 00:16:57.997 "num_base_bdevs": 2, 00:16:57.997 "num_base_bdevs_discovered": 2, 00:16:57.997 "num_base_bdevs_operational": 2, 00:16:57.997 "base_bdevs_list": [ 00:16:57.997 { 00:16:57.997 "name": "BaseBdev1", 00:16:57.997 "uuid": "e60ac90c-aa64-5c85-9cb7-6be2f20cc720", 00:16:57.997 "is_configured": true, 00:16:57.997 "data_offset": 256, 00:16:57.997 "data_size": 7936 00:16:57.997 }, 00:16:57.997 { 00:16:57.997 "name": "BaseBdev2", 00:16:57.997 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:16:57.997 "is_configured": true, 00:16:57.997 "data_offset": 256, 00:16:57.997 "data_size": 7936 00:16:57.997 } 00:16:57.997 ] 00:16:57.997 }' 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.997 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:58.565 [2024-11-26 13:29:46.926207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.565 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.565 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.565 
13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:58.824 [2024-11-26 13:29:47.222019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:58.824 /dev/nbd0 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.824 1+0 records in 00:16:58.824 1+0 records out 00:16:58.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494582 s, 8.3 MB/s 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:58.824 13:29:47 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:58.824 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:59.759 7936+0 records in 00:16:59.759 7936+0 records out 00:16:59.759 32505856 bytes (33 MB, 31 MiB) copied, 0.79431 s, 40.9 MB/s 00:16:59.759 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:59.759 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.759 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:59.759 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.759 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:59.759 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.759 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.018 
13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.018 [2024-11-26 13:29:48.356469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.018 [2024-11-26 13:29:48.364563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.018 13:29:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.018 "name": "raid_bdev1", 00:17:00.018 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:00.018 "strip_size_kb": 0, 00:17:00.018 "state": "online", 00:17:00.018 "raid_level": "raid1", 00:17:00.018 "superblock": true, 00:17:00.018 "num_base_bdevs": 2, 00:17:00.018 "num_base_bdevs_discovered": 1, 00:17:00.018 "num_base_bdevs_operational": 1, 00:17:00.018 "base_bdevs_list": [ 00:17:00.018 { 00:17:00.018 "name": null, 00:17:00.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.018 "is_configured": false, 00:17:00.018 "data_offset": 0, 00:17:00.018 "data_size": 7936 00:17:00.018 }, 00:17:00.018 { 00:17:00.018 "name": "BaseBdev2", 00:17:00.018 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:00.018 "is_configured": true, 00:17:00.018 "data_offset": 256, 00:17:00.018 
"data_size": 7936 00:17:00.018 } 00:17:00.018 ] 00:17:00.018 }' 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.018 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.585 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.586 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.586 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.586 [2024-11-26 13:29:48.856646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.586 [2024-11-26 13:29:48.870047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:00.586 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.586 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:00.586 [2024-11-26 13:29:48.872030] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.520 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.520 "name": "raid_bdev1", 00:17:01.520 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:01.520 "strip_size_kb": 0, 00:17:01.520 "state": "online", 00:17:01.520 "raid_level": "raid1", 00:17:01.520 "superblock": true, 00:17:01.520 "num_base_bdevs": 2, 00:17:01.520 "num_base_bdevs_discovered": 2, 00:17:01.521 "num_base_bdevs_operational": 2, 00:17:01.521 "process": { 00:17:01.521 "type": "rebuild", 00:17:01.521 "target": "spare", 00:17:01.521 "progress": { 00:17:01.521 "blocks": 2560, 00:17:01.521 "percent": 32 00:17:01.521 } 00:17:01.521 }, 00:17:01.521 "base_bdevs_list": [ 00:17:01.521 { 00:17:01.521 "name": "spare", 00:17:01.521 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:01.521 "is_configured": true, 00:17:01.521 "data_offset": 256, 00:17:01.521 "data_size": 7936 00:17:01.521 }, 00:17:01.521 { 00:17:01.521 "name": "BaseBdev2", 00:17:01.521 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:01.521 "is_configured": true, 00:17:01.521 "data_offset": 256, 00:17:01.521 "data_size": 7936 00:17:01.521 } 00:17:01.521 ] 00:17:01.521 }' 00:17:01.521 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.521 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.521 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.521 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:01.521 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.521 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.521 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.521 [2024-11-26 13:29:50.041627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.521 [2024-11-26 13:29:50.079253] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.521 [2024-11-26 13:29:50.079477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.521 [2024-11-26 13:29:50.079503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.521 [2024-11-26 13:29:50.079518] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.780 "name": "raid_bdev1", 00:17:01.780 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:01.780 "strip_size_kb": 0, 00:17:01.780 "state": "online", 00:17:01.780 "raid_level": "raid1", 00:17:01.780 "superblock": true, 00:17:01.780 "num_base_bdevs": 2, 00:17:01.780 "num_base_bdevs_discovered": 1, 00:17:01.780 "num_base_bdevs_operational": 1, 00:17:01.780 "base_bdevs_list": [ 00:17:01.780 { 00:17:01.780 "name": null, 00:17:01.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.780 "is_configured": false, 00:17:01.780 "data_offset": 0, 00:17:01.780 "data_size": 7936 00:17:01.780 }, 00:17:01.780 { 00:17:01.780 "name": "BaseBdev2", 00:17:01.780 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:01.780 "is_configured": true, 00:17:01.780 "data_offset": 256, 00:17:01.780 "data_size": 7936 00:17:01.780 } 00:17:01.780 ] 00:17:01.780 }' 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.780 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.347 13:29:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.347 "name": "raid_bdev1", 00:17:02.347 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:02.347 "strip_size_kb": 0, 00:17:02.347 "state": "online", 00:17:02.347 "raid_level": "raid1", 00:17:02.347 "superblock": true, 00:17:02.347 "num_base_bdevs": 2, 00:17:02.347 "num_base_bdevs_discovered": 1, 00:17:02.347 "num_base_bdevs_operational": 1, 00:17:02.347 "base_bdevs_list": [ 00:17:02.347 { 00:17:02.347 "name": null, 00:17:02.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.347 "is_configured": false, 00:17:02.347 "data_offset": 0, 00:17:02.347 "data_size": 7936 00:17:02.347 }, 00:17:02.347 { 00:17:02.347 "name": "BaseBdev2", 00:17:02.347 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:02.347 "is_configured": true, 00:17:02.347 "data_offset": 
256, 00:17:02.347 "data_size": 7936 00:17:02.347 } 00:17:02.347 ] 00:17:02.347 }' 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.347 [2024-11-26 13:29:50.791525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.347 [2024-11-26 13:29:50.802265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.347 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:02.347 [2024-11-26 13:29:50.804281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.283 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.542 "name": "raid_bdev1", 00:17:03.542 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:03.542 "strip_size_kb": 0, 00:17:03.542 "state": "online", 00:17:03.542 "raid_level": "raid1", 00:17:03.542 "superblock": true, 00:17:03.542 "num_base_bdevs": 2, 00:17:03.542 "num_base_bdevs_discovered": 2, 00:17:03.542 "num_base_bdevs_operational": 2, 00:17:03.542 "process": { 00:17:03.542 "type": "rebuild", 00:17:03.542 "target": "spare", 00:17:03.542 "progress": { 00:17:03.542 "blocks": 2560, 00:17:03.542 "percent": 32 00:17:03.542 } 00:17:03.542 }, 00:17:03.542 "base_bdevs_list": [ 00:17:03.542 { 00:17:03.542 "name": "spare", 00:17:03.542 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:03.542 "is_configured": true, 00:17:03.542 "data_offset": 256, 00:17:03.542 "data_size": 7936 00:17:03.542 }, 00:17:03.542 { 00:17:03.542 "name": "BaseBdev2", 00:17:03.542 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:03.542 "is_configured": true, 00:17:03.542 "data_offset": 256, 00:17:03.542 "data_size": 7936 00:17:03.542 } 00:17:03.542 ] 00:17:03.542 }' 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:03.542 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=693 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.542 13:29:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.542 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.543 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.543 "name": "raid_bdev1", 00:17:03.543 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:03.543 "strip_size_kb": 0, 00:17:03.543 "state": "online", 00:17:03.543 "raid_level": "raid1", 00:17:03.543 "superblock": true, 00:17:03.543 "num_base_bdevs": 2, 00:17:03.543 "num_base_bdevs_discovered": 2, 00:17:03.543 "num_base_bdevs_operational": 2, 00:17:03.543 "process": { 00:17:03.543 "type": "rebuild", 00:17:03.543 "target": "spare", 00:17:03.543 "progress": { 00:17:03.543 "blocks": 2816, 00:17:03.543 "percent": 35 00:17:03.543 } 00:17:03.543 }, 00:17:03.543 "base_bdevs_list": [ 00:17:03.543 { 00:17:03.543 "name": "spare", 00:17:03.543 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:03.543 "is_configured": true, 00:17:03.543 "data_offset": 256, 00:17:03.543 "data_size": 7936 00:17:03.543 }, 00:17:03.543 { 00:17:03.543 "name": "BaseBdev2", 00:17:03.543 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:03.543 "is_configured": true, 00:17:03.543 "data_offset": 256, 00:17:03.543 "data_size": 7936 00:17:03.543 } 00:17:03.543 ] 00:17:03.543 }' 00:17:03.543 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.543 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.543 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.801 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.801 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.738 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.739 "name": "raid_bdev1", 00:17:04.739 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:04.739 "strip_size_kb": 0, 00:17:04.739 "state": "online", 00:17:04.739 "raid_level": "raid1", 00:17:04.739 "superblock": true, 00:17:04.739 "num_base_bdevs": 2, 00:17:04.739 "num_base_bdevs_discovered": 2, 00:17:04.739 "num_base_bdevs_operational": 2, 00:17:04.739 "process": { 00:17:04.739 "type": "rebuild", 00:17:04.739 "target": "spare", 00:17:04.739 "progress": { 00:17:04.739 "blocks": 5888, 00:17:04.739 "percent": 74 00:17:04.739 } 00:17:04.739 }, 00:17:04.739 "base_bdevs_list": [ 00:17:04.739 { 
00:17:04.739 "name": "spare", 00:17:04.739 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:04.739 "is_configured": true, 00:17:04.739 "data_offset": 256, 00:17:04.739 "data_size": 7936 00:17:04.739 }, 00:17:04.739 { 00:17:04.739 "name": "BaseBdev2", 00:17:04.739 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:04.739 "is_configured": true, 00:17:04.739 "data_offset": 256, 00:17:04.739 "data_size": 7936 00:17:04.739 } 00:17:04.739 ] 00:17:04.739 }' 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.739 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.676 [2024-11-26 13:29:53.921153] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:05.676 [2024-11-26 13:29:53.921403] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:05.676 [2024-11-26 13:29:53.921535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.935 "name": "raid_bdev1", 00:17:05.935 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:05.935 "strip_size_kb": 0, 00:17:05.935 "state": "online", 00:17:05.935 "raid_level": "raid1", 00:17:05.935 "superblock": true, 00:17:05.935 "num_base_bdevs": 2, 00:17:05.935 "num_base_bdevs_discovered": 2, 00:17:05.935 "num_base_bdevs_operational": 2, 00:17:05.935 "base_bdevs_list": [ 00:17:05.935 { 00:17:05.935 "name": "spare", 00:17:05.935 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:05.935 "is_configured": true, 00:17:05.935 "data_offset": 256, 00:17:05.935 "data_size": 7936 00:17:05.935 }, 00:17:05.935 { 00:17:05.935 "name": "BaseBdev2", 00:17:05.935 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:05.935 "is_configured": true, 00:17:05.935 "data_offset": 256, 00:17:05.935 "data_size": 7936 00:17:05.935 } 00:17:05.935 ] 00:17:05.935 }' 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.935 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.195 "name": "raid_bdev1", 00:17:06.195 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:06.195 "strip_size_kb": 0, 00:17:06.195 "state": "online", 00:17:06.195 "raid_level": "raid1", 00:17:06.195 "superblock": true, 00:17:06.195 "num_base_bdevs": 2, 00:17:06.195 "num_base_bdevs_discovered": 2, 00:17:06.195 "num_base_bdevs_operational": 2, 00:17:06.195 "base_bdevs_list": [ 00:17:06.195 { 00:17:06.195 "name": "spare", 00:17:06.195 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:06.195 "is_configured": true, 00:17:06.195 
"data_offset": 256, 00:17:06.195 "data_size": 7936 00:17:06.195 }, 00:17:06.195 { 00:17:06.195 "name": "BaseBdev2", 00:17:06.195 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:06.195 "is_configured": true, 00:17:06.195 "data_offset": 256, 00:17:06.195 "data_size": 7936 00:17:06.195 } 00:17:06.195 ] 00:17:06.195 }' 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.195 "name": "raid_bdev1", 00:17:06.195 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:06.195 "strip_size_kb": 0, 00:17:06.195 "state": "online", 00:17:06.195 "raid_level": "raid1", 00:17:06.195 "superblock": true, 00:17:06.195 "num_base_bdevs": 2, 00:17:06.195 "num_base_bdevs_discovered": 2, 00:17:06.195 "num_base_bdevs_operational": 2, 00:17:06.195 "base_bdevs_list": [ 00:17:06.195 { 00:17:06.195 "name": "spare", 00:17:06.195 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:06.195 "is_configured": true, 00:17:06.195 "data_offset": 256, 00:17:06.195 "data_size": 7936 00:17:06.195 }, 00:17:06.195 { 00:17:06.195 "name": "BaseBdev2", 00:17:06.195 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:06.195 "is_configured": true, 00:17:06.195 "data_offset": 256, 00:17:06.195 "data_size": 7936 00:17:06.195 } 00:17:06.195 ] 00:17:06.195 }' 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.195 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.763 
[2024-11-26 13:29:55.137548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.763 [2024-11-26 13:29:55.137716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.763 [2024-11-26 13:29:55.137889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.763 [2024-11-26 13:29:55.137975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.763 [2024-11-26 13:29:55.137991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.763 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:07.022 /dev/nbd0 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.022 1+0 records in 00:17:07.022 1+0 records out 00:17:07.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240156 s, 17.1 MB/s 00:17:07.022 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.023 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:07.023 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.023 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.023 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:07.023 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.023 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.023 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:07.282 /dev/nbd1 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.282 1+0 records in 00:17:07.282 1+0 records out 00:17:07.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319733 s, 12.8 MB/s 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.282 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:07.541 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:07.541 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.541 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.541 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.541 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:07.541 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.541 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.800 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.059 13:29:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.059 [2024-11-26 13:29:56.508388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.059 [2024-11-26 13:29:56.508569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.059 [2024-11-26 13:29:56.508636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:08.059 [2024-11-26 13:29:56.508858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.059 [2024-11-26 13:29:56.511179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.059 
[2024-11-26 13:29:56.511355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.059 [2024-11-26 13:29:56.511557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:08.059 [2024-11-26 13:29:56.511749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.059 [2024-11-26 13:29:56.512042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.059 spare 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.059 [2024-11-26 13:29:56.612317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:08.059 [2024-11-26 13:29:56.612450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.059 [2024-11-26 13:29:56.612750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:08.059 [2024-11-26 13:29:56.613024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:08.059 [2024-11-26 13:29:56.613048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:08.059 [2024-11-26 13:29:56.613248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.059 13:29:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.059 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.318 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.318 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.318 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.318 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.318 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.318 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.318 "name": "raid_bdev1", 00:17:08.318 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:08.318 "strip_size_kb": 0, 00:17:08.318 "state": "online", 00:17:08.318 "raid_level": "raid1", 00:17:08.318 "superblock": true, 00:17:08.318 "num_base_bdevs": 2, 00:17:08.318 "num_base_bdevs_discovered": 2, 00:17:08.318 "num_base_bdevs_operational": 2, 
00:17:08.318 "base_bdevs_list": [ 00:17:08.318 { 00:17:08.318 "name": "spare", 00:17:08.318 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:08.319 "is_configured": true, 00:17:08.319 "data_offset": 256, 00:17:08.319 "data_size": 7936 00:17:08.319 }, 00:17:08.319 { 00:17:08.319 "name": "BaseBdev2", 00:17:08.319 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:08.319 "is_configured": true, 00:17:08.319 "data_offset": 256, 00:17:08.319 "data_size": 7936 00:17:08.319 } 00:17:08.319 ] 00:17:08.319 }' 00:17:08.319 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.319 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.577 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.836 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.836 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.836 "name": "raid_bdev1", 00:17:08.836 
"uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:08.836 "strip_size_kb": 0, 00:17:08.836 "state": "online", 00:17:08.836 "raid_level": "raid1", 00:17:08.836 "superblock": true, 00:17:08.836 "num_base_bdevs": 2, 00:17:08.836 "num_base_bdevs_discovered": 2, 00:17:08.836 "num_base_bdevs_operational": 2, 00:17:08.836 "base_bdevs_list": [ 00:17:08.836 { 00:17:08.836 "name": "spare", 00:17:08.836 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:08.836 "is_configured": true, 00:17:08.836 "data_offset": 256, 00:17:08.836 "data_size": 7936 00:17:08.836 }, 00:17:08.836 { 00:17:08.836 "name": "BaseBdev2", 00:17:08.836 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:08.836 "is_configured": true, 00:17:08.836 "data_offset": 256, 00:17:08.836 "data_size": 7936 00:17:08.836 } 00:17:08.836 ] 00:17:08.836 }' 00:17:08.836 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.836 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.836 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.836 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.837 [2024-11-26 13:29:57.345348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.837 
13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.837 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.096 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.096 "name": "raid_bdev1", 00:17:09.096 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:09.096 "strip_size_kb": 0, 00:17:09.096 "state": "online", 00:17:09.096 "raid_level": "raid1", 00:17:09.096 "superblock": true, 00:17:09.096 "num_base_bdevs": 2, 00:17:09.096 "num_base_bdevs_discovered": 1, 00:17:09.096 "num_base_bdevs_operational": 1, 00:17:09.096 "base_bdevs_list": [ 00:17:09.096 { 00:17:09.096 "name": null, 00:17:09.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.096 "is_configured": false, 00:17:09.096 "data_offset": 0, 00:17:09.096 "data_size": 7936 00:17:09.096 }, 00:17:09.096 { 00:17:09.096 "name": "BaseBdev2", 00:17:09.096 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:09.096 "is_configured": true, 00:17:09.096 "data_offset": 256, 00:17:09.096 "data_size": 7936 00:17:09.096 } 00:17:09.096 ] 00:17:09.096 }' 00:17:09.096 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.096 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.355 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.355 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.355 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.355 [2024-11-26 13:29:57.873473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.355 [2024-11-26 13:29:57.873755] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:17:09.355 [2024-11-26 13:29:57.873790] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:09.355 [2024-11-26 13:29:57.873829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.355 [2024-11-26 13:29:57.886286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:09.355 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.355 13:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:09.355 [2024-11-26 13:29:57.888299] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.734 
"name": "raid_bdev1", 00:17:10.734 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:10.734 "strip_size_kb": 0, 00:17:10.734 "state": "online", 00:17:10.734 "raid_level": "raid1", 00:17:10.734 "superblock": true, 00:17:10.734 "num_base_bdevs": 2, 00:17:10.734 "num_base_bdevs_discovered": 2, 00:17:10.734 "num_base_bdevs_operational": 2, 00:17:10.734 "process": { 00:17:10.734 "type": "rebuild", 00:17:10.734 "target": "spare", 00:17:10.734 "progress": { 00:17:10.734 "blocks": 2560, 00:17:10.734 "percent": 32 00:17:10.734 } 00:17:10.734 }, 00:17:10.734 "base_bdevs_list": [ 00:17:10.734 { 00:17:10.734 "name": "spare", 00:17:10.734 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:10.734 "is_configured": true, 00:17:10.734 "data_offset": 256, 00:17:10.734 "data_size": 7936 00:17:10.734 }, 00:17:10.734 { 00:17:10.734 "name": "BaseBdev2", 00:17:10.734 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:10.734 "is_configured": true, 00:17:10.734 "data_offset": 256, 00:17:10.734 "data_size": 7936 00:17:10.734 } 00:17:10.734 ] 00:17:10.734 }' 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.734 13:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.734 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.734 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.734 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:10.734 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 [2024-11-26 13:29:59.038058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.734 [2024-11-26 
13:29:59.095626] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.734 [2024-11-26 13:29:59.095821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.734 [2024-11-26 13:29:59.095847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.734 [2024-11-26 13:29:59.095861] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.734 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.735 "name": "raid_bdev1", 00:17:10.735 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:10.735 "strip_size_kb": 0, 00:17:10.735 "state": "online", 00:17:10.735 "raid_level": "raid1", 00:17:10.735 "superblock": true, 00:17:10.735 "num_base_bdevs": 2, 00:17:10.735 "num_base_bdevs_discovered": 1, 00:17:10.735 "num_base_bdevs_operational": 1, 00:17:10.735 "base_bdevs_list": [ 00:17:10.735 { 00:17:10.735 "name": null, 00:17:10.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.735 "is_configured": false, 00:17:10.735 "data_offset": 0, 00:17:10.735 "data_size": 7936 00:17:10.735 }, 00:17:10.735 { 00:17:10.735 "name": "BaseBdev2", 00:17:10.735 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:10.735 "is_configured": true, 00:17:10.735 "data_offset": 256, 00:17:10.735 "data_size": 7936 00:17:10.735 } 00:17:10.735 ] 00:17:10.735 }' 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.735 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.303 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.303 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.303 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.303 [2024-11-26 13:29:59.631855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.303 [2024-11-26 13:29:59.632068] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.303 [2024-11-26 13:29:59.632133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:11.303 [2024-11-26 13:29:59.632156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.303 [2024-11-26 13:29:59.632680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.303 [2024-11-26 13:29:59.632726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.303 [2024-11-26 13:29:59.632858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:11.303 [2024-11-26 13:29:59.632881] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:11.303 [2024-11-26 13:29:59.632894] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:11.303 [2024-11-26 13:29:59.632923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.303 [2024-11-26 13:29:59.643684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:11.303 spare 00:17:11.303 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.303 13:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:11.303 [2024-11-26 13:29:59.645661] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.239 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.239 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.239 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.239 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.240 "name": "raid_bdev1", 00:17:12.240 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:12.240 "strip_size_kb": 0, 00:17:12.240 
"state": "online", 00:17:12.240 "raid_level": "raid1", 00:17:12.240 "superblock": true, 00:17:12.240 "num_base_bdevs": 2, 00:17:12.240 "num_base_bdevs_discovered": 2, 00:17:12.240 "num_base_bdevs_operational": 2, 00:17:12.240 "process": { 00:17:12.240 "type": "rebuild", 00:17:12.240 "target": "spare", 00:17:12.240 "progress": { 00:17:12.240 "blocks": 2560, 00:17:12.240 "percent": 32 00:17:12.240 } 00:17:12.240 }, 00:17:12.240 "base_bdevs_list": [ 00:17:12.240 { 00:17:12.240 "name": "spare", 00:17:12.240 "uuid": "7e403b11-0a33-5447-99f0-8d169856eb57", 00:17:12.240 "is_configured": true, 00:17:12.240 "data_offset": 256, 00:17:12.240 "data_size": 7936 00:17:12.240 }, 00:17:12.240 { 00:17:12.240 "name": "BaseBdev2", 00:17:12.240 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:12.240 "is_configured": true, 00:17:12.240 "data_offset": 256, 00:17:12.240 "data_size": 7936 00:17:12.240 } 00:17:12.240 ] 00:17:12.240 }' 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.240 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.240 [2024-11-26 13:30:00.795722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.498 [2024-11-26 13:30:00.852181] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:12.498 [2024-11-26 13:30:00.852258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.498 [2024-11-26 13:30:00.852283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.498 [2024-11-26 13:30:00.852294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.498 13:30:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.498 "name": "raid_bdev1", 00:17:12.498 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:12.498 "strip_size_kb": 0, 00:17:12.498 "state": "online", 00:17:12.498 "raid_level": "raid1", 00:17:12.498 "superblock": true, 00:17:12.498 "num_base_bdevs": 2, 00:17:12.498 "num_base_bdevs_discovered": 1, 00:17:12.498 "num_base_bdevs_operational": 1, 00:17:12.498 "base_bdevs_list": [ 00:17:12.498 { 00:17:12.498 "name": null, 00:17:12.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.498 "is_configured": false, 00:17:12.498 "data_offset": 0, 00:17:12.498 "data_size": 7936 00:17:12.498 }, 00:17:12.498 { 00:17:12.498 "name": "BaseBdev2", 00:17:12.498 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:12.498 "is_configured": true, 00:17:12.498 "data_offset": 256, 00:17:12.498 "data_size": 7936 00:17:12.498 } 00:17:12.498 ] 00:17:12.498 }' 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.498 13:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.094 "name": "raid_bdev1", 00:17:13.094 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:13.094 "strip_size_kb": 0, 00:17:13.094 "state": "online", 00:17:13.094 "raid_level": "raid1", 00:17:13.094 "superblock": true, 00:17:13.094 "num_base_bdevs": 2, 00:17:13.094 "num_base_bdevs_discovered": 1, 00:17:13.094 "num_base_bdevs_operational": 1, 00:17:13.094 "base_bdevs_list": [ 00:17:13.094 { 00:17:13.094 "name": null, 00:17:13.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.094 "is_configured": false, 00:17:13.094 "data_offset": 0, 00:17:13.094 "data_size": 7936 00:17:13.094 }, 00:17:13.094 { 00:17:13.094 "name": "BaseBdev2", 00:17:13.094 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:13.094 "is_configured": true, 00:17:13.094 "data_offset": 256, 00:17:13.094 "data_size": 7936 00:17:13.094 } 00:17:13.094 ] 00:17:13.094 }' 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.094 [2024-11-26 13:30:01.575825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:13.094 [2024-11-26 13:30:01.575876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.094 [2024-11-26 13:30:01.575903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:13.094 [2024-11-26 13:30:01.575925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.094 [2024-11-26 13:30:01.576389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.094 [2024-11-26 13:30:01.576412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.094 [2024-11-26 13:30:01.576493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:13.094 [2024-11-26 13:30:01.576511] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:13.094 [2024-11-26 13:30:01.576526] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:13.094 [2024-11-26 13:30:01.576536] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:13.094 BaseBdev1 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.094 13:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.033 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.292 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.292 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.292 "name": "raid_bdev1", 00:17:14.292 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:14.292 "strip_size_kb": 0, 00:17:14.292 "state": "online", 00:17:14.292 "raid_level": "raid1", 00:17:14.292 "superblock": true, 00:17:14.292 "num_base_bdevs": 2, 00:17:14.292 "num_base_bdevs_discovered": 1, 00:17:14.292 "num_base_bdevs_operational": 1, 00:17:14.292 "base_bdevs_list": [ 00:17:14.292 { 00:17:14.292 "name": null, 00:17:14.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.292 "is_configured": false, 00:17:14.292 "data_offset": 0, 00:17:14.292 "data_size": 7936 00:17:14.292 }, 00:17:14.292 { 00:17:14.292 "name": "BaseBdev2", 00:17:14.292 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:14.292 "is_configured": true, 00:17:14.292 "data_offset": 256, 00:17:14.292 "data_size": 7936 00:17:14.292 } 00:17:14.292 ] 00:17:14.292 }' 00:17:14.292 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.292 13:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.552 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.812 "name": "raid_bdev1", 00:17:14.812 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:14.812 "strip_size_kb": 0, 00:17:14.812 "state": "online", 00:17:14.812 "raid_level": "raid1", 00:17:14.812 "superblock": true, 00:17:14.812 "num_base_bdevs": 2, 00:17:14.812 "num_base_bdevs_discovered": 1, 00:17:14.812 "num_base_bdevs_operational": 1, 00:17:14.812 "base_bdevs_list": [ 00:17:14.812 { 00:17:14.812 "name": null, 00:17:14.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.812 "is_configured": false, 00:17:14.812 "data_offset": 0, 00:17:14.812 "data_size": 7936 00:17:14.812 }, 00:17:14.812 { 00:17:14.812 "name": "BaseBdev2", 00:17:14.812 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:14.812 "is_configured": true, 00:17:14.812 "data_offset": 256, 00:17:14.812 "data_size": 7936 00:17:14.812 } 00:17:14.812 ] 00:17:14.812 }' 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.812 [2024-11-26 13:30:03.260187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.812 [2024-11-26 13:30:03.260323] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.812 [2024-11-26 13:30:03.260345] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:14.812 request: 00:17:14.812 { 00:17:14.812 "base_bdev": "BaseBdev1", 00:17:14.812 "raid_bdev": "raid_bdev1", 00:17:14.812 "method": "bdev_raid_add_base_bdev", 00:17:14.812 "req_id": 1 00:17:14.812 } 00:17:14.812 Got JSON-RPC error response 00:17:14.812 response: 00:17:14.812 { 00:17:14.812 "code": -22, 00:17:14.812 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:14.812 } 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.812 13:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.822 "name": "raid_bdev1", 00:17:15.822 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:15.822 "strip_size_kb": 0, 00:17:15.822 "state": "online", 00:17:15.822 "raid_level": "raid1", 00:17:15.822 "superblock": true, 00:17:15.822 "num_base_bdevs": 2, 00:17:15.822 "num_base_bdevs_discovered": 1, 00:17:15.822 "num_base_bdevs_operational": 1, 00:17:15.822 "base_bdevs_list": [ 00:17:15.822 { 00:17:15.822 "name": null, 00:17:15.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.822 "is_configured": false, 00:17:15.822 "data_offset": 0, 00:17:15.822 "data_size": 7936 00:17:15.822 }, 00:17:15.822 { 00:17:15.822 "name": "BaseBdev2", 00:17:15.822 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:15.822 "is_configured": true, 00:17:15.822 "data_offset": 256, 00:17:15.822 "data_size": 7936 00:17:15.822 } 00:17:15.822 ] 00:17:15.822 }' 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.822 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.401 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.402 13:30:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.402 "name": "raid_bdev1", 00:17:16.402 "uuid": "b3ffc6cb-632c-4517-9982-bfbc33589434", 00:17:16.402 "strip_size_kb": 0, 00:17:16.402 "state": "online", 00:17:16.402 "raid_level": "raid1", 00:17:16.402 "superblock": true, 00:17:16.402 "num_base_bdevs": 2, 00:17:16.402 "num_base_bdevs_discovered": 1, 00:17:16.402 "num_base_bdevs_operational": 1, 00:17:16.402 "base_bdevs_list": [ 00:17:16.402 { 00:17:16.402 "name": null, 00:17:16.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.402 "is_configured": false, 00:17:16.402 "data_offset": 0, 00:17:16.402 "data_size": 7936 00:17:16.402 }, 00:17:16.402 { 00:17:16.402 "name": "BaseBdev2", 00:17:16.402 "uuid": "2a2f50cd-37da-5e0d-af12-e53744d08acf", 00:17:16.402 "is_configured": true, 00:17:16.402 "data_offset": 256, 00:17:16.402 "data_size": 7936 00:17:16.402 } 00:17:16.402 ] 00:17:16.402 }' 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.402 13:30:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86152 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86152 ']' 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86152 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.402 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86152 00:17:16.661 killing process with pid 86152 00:17:16.661 Received shutdown signal, test time was about 60.000000 seconds 00:17:16.661 00:17:16.661 Latency(us) 00:17:16.661 [2024-11-26T13:30:05.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.661 [2024-11-26T13:30:05.231Z] =================================================================================================================== 00:17:16.661 [2024-11-26T13:30:05.231Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.661 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.661 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.661 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86152' 00:17:16.661 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86152 00:17:16.661 [2024-11-26 13:30:04.983650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.661 [2024-11-26 13:30:04.983743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.661 13:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86152 00:17:16.661 [2024-11-26 
13:30:04.983789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.661 [2024-11-26 13:30:04.983805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:16.661 [2024-11-26 13:30:05.186015] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.599 ************************************ 00:17:17.599 END TEST raid_rebuild_test_sb_4k 00:17:17.599 ************************************ 00:17:17.599 13:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:17.599 00:17:17.599 real 0m20.800s 00:17:17.599 user 0m28.269s 00:17:17.599 sys 0m2.334s 00:17:17.599 13:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.599 13:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.599 13:30:06 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:17.599 13:30:06 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:17.599 13:30:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:17.599 13:30:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.599 13:30:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.599 ************************************ 00:17:17.599 START TEST raid_state_function_test_sb_md_separate 00:17:17.599 ************************************ 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:17.599 
13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:17.599 13:30:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86849 00:17:17.599 Process raid pid: 86849 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86849' 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86849 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86849 ']' 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.599 13:30:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.858 [2024-11-26 13:30:06.194930] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:17:17.858 [2024-11-26 13:30:06.195113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.858 [2024-11-26 13:30:06.370862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.117 [2024-11-26 13:30:06.469643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.117 [2024-11-26 13:30:06.639270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.117 [2024-11-26 13:30:06.639312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.685 [2024-11-26 13:30:07.144105] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.685 [2024-11-26 13:30:07.144167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:18.685 [2024-11-26 13:30:07.144183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.685 [2024-11-26 13:30:07.144197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.685 "name": "Existed_Raid", 00:17:18.685 "uuid": "0160942e-4b30-42ee-9a69-f691d9ee83c2", 00:17:18.685 "strip_size_kb": 0, 00:17:18.685 "state": "configuring", 00:17:18.685 "raid_level": "raid1", 00:17:18.685 "superblock": true, 00:17:18.685 "num_base_bdevs": 2, 00:17:18.685 "num_base_bdevs_discovered": 0, 00:17:18.685 "num_base_bdevs_operational": 2, 00:17:18.685 "base_bdevs_list": [ 00:17:18.685 { 00:17:18.685 "name": "BaseBdev1", 00:17:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.685 "is_configured": false, 00:17:18.685 "data_offset": 0, 00:17:18.685 "data_size": 0 00:17:18.685 }, 00:17:18.685 { 00:17:18.685 "name": "BaseBdev2", 00:17:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.685 "is_configured": false, 00:17:18.685 "data_offset": 0, 00:17:18.685 "data_size": 0 00:17:18.685 } 00:17:18.685 ] 00:17:18.685 }' 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.685 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 
[2024-11-26 13:30:07.656138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.255 [2024-11-26 13:30:07.656321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 [2024-11-26 13:30:07.664140] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.255 [2024-11-26 13:30:07.664181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.255 [2024-11-26 13:30:07.664193] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.255 [2024-11-26 13:30:07.664208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 [2024-11-26 13:30:07.703503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.255 
BaseBdev1 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.255 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 [ 00:17:19.255 { 00:17:19.255 "name": "BaseBdev1", 00:17:19.255 "aliases": [ 00:17:19.255 "86ed47c2-fa4d-425a-9cf1-925b51c2b39e" 00:17:19.255 ], 00:17:19.255 "product_name": "Malloc disk", 
00:17:19.255 "block_size": 4096, 00:17:19.255 "num_blocks": 8192, 00:17:19.255 "uuid": "86ed47c2-fa4d-425a-9cf1-925b51c2b39e", 00:17:19.255 "md_size": 32, 00:17:19.255 "md_interleave": false, 00:17:19.255 "dif_type": 0, 00:17:19.255 "assigned_rate_limits": { 00:17:19.255 "rw_ios_per_sec": 0, 00:17:19.255 "rw_mbytes_per_sec": 0, 00:17:19.255 "r_mbytes_per_sec": 0, 00:17:19.255 "w_mbytes_per_sec": 0 00:17:19.255 }, 00:17:19.255 "claimed": true, 00:17:19.255 "claim_type": "exclusive_write", 00:17:19.255 "zoned": false, 00:17:19.255 "supported_io_types": { 00:17:19.255 "read": true, 00:17:19.255 "write": true, 00:17:19.255 "unmap": true, 00:17:19.255 "flush": true, 00:17:19.255 "reset": true, 00:17:19.255 "nvme_admin": false, 00:17:19.255 "nvme_io": false, 00:17:19.255 "nvme_io_md": false, 00:17:19.255 "write_zeroes": true, 00:17:19.255 "zcopy": true, 00:17:19.255 "get_zone_info": false, 00:17:19.255 "zone_management": false, 00:17:19.255 "zone_append": false, 00:17:19.255 "compare": false, 00:17:19.255 "compare_and_write": false, 00:17:19.255 "abort": true, 00:17:19.255 "seek_hole": false, 00:17:19.255 "seek_data": false, 00:17:19.255 "copy": true, 00:17:19.255 "nvme_iov_md": false 00:17:19.255 }, 00:17:19.255 "memory_domains": [ 00:17:19.255 { 00:17:19.255 "dma_device_id": "system", 00:17:19.255 "dma_device_type": 1 00:17:19.255 }, 00:17:19.255 { 00:17:19.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.255 "dma_device_type": 2 00:17:19.256 } 00:17:19.256 ], 00:17:19.256 "driver_specific": {} 00:17:19.256 } 00:17:19.256 ] 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:19.256 13:30:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.256 "name": "Existed_Raid", 00:17:19.256 "uuid": "417bb6b4-9164-44b6-a84b-e80573d27f40", 
00:17:19.256 "strip_size_kb": 0, 00:17:19.256 "state": "configuring", 00:17:19.256 "raid_level": "raid1", 00:17:19.256 "superblock": true, 00:17:19.256 "num_base_bdevs": 2, 00:17:19.256 "num_base_bdevs_discovered": 1, 00:17:19.256 "num_base_bdevs_operational": 2, 00:17:19.256 "base_bdevs_list": [ 00:17:19.256 { 00:17:19.256 "name": "BaseBdev1", 00:17:19.256 "uuid": "86ed47c2-fa4d-425a-9cf1-925b51c2b39e", 00:17:19.256 "is_configured": true, 00:17:19.256 "data_offset": 256, 00:17:19.256 "data_size": 7936 00:17:19.256 }, 00:17:19.256 { 00:17:19.256 "name": "BaseBdev2", 00:17:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.256 "is_configured": false, 00:17:19.256 "data_offset": 0, 00:17:19.256 "data_size": 0 00:17:19.256 } 00:17:19.256 ] 00:17:19.256 }' 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.256 13:30:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.823 [2024-11-26 13:30:08.263657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.823 [2024-11-26 13:30:08.263695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:19.823 13:30:08 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.823 [2024-11-26 13:30:08.271698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.823 [2024-11-26 13:30:08.273588] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.823 [2024-11-26 13:30:08.273628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.823 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.823 "name": "Existed_Raid", 00:17:19.823 "uuid": "f714679e-0a49-46c4-9594-ff3303ac0301", 00:17:19.824 "strip_size_kb": 0, 00:17:19.824 "state": "configuring", 00:17:19.824 "raid_level": "raid1", 00:17:19.824 "superblock": true, 00:17:19.824 "num_base_bdevs": 2, 00:17:19.824 "num_base_bdevs_discovered": 1, 00:17:19.824 "num_base_bdevs_operational": 2, 00:17:19.824 "base_bdevs_list": [ 00:17:19.824 { 00:17:19.824 "name": "BaseBdev1", 00:17:19.824 "uuid": "86ed47c2-fa4d-425a-9cf1-925b51c2b39e", 00:17:19.824 "is_configured": true, 00:17:19.824 "data_offset": 256, 00:17:19.824 "data_size": 7936 00:17:19.824 }, 00:17:19.824 { 00:17:19.824 "name": "BaseBdev2", 00:17:19.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.824 "is_configured": false, 00:17:19.824 "data_offset": 0, 00:17:19.824 "data_size": 0 00:17:19.824 } 00:17:19.824 ] 00:17:19.824 }' 00:17:19.824 13:30:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.824 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.391 [2024-11-26 13:30:08.836832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.391 [2024-11-26 13:30:08.837218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:20.391 [2024-11-26 13:30:08.837377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.391 [2024-11-26 13:30:08.837524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:20.391 BaseBdev2 00:17:20.391 [2024-11-26 13:30:08.837814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:20.391 [2024-11-26 13:30:08.837834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:20.391 [2024-11-26 13:30:08.837936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.391 [ 00:17:20.391 { 00:17:20.391 "name": "BaseBdev2", 00:17:20.391 "aliases": [ 00:17:20.391 "cd8312c4-9d14-4c09-8449-875696dce5a8" 00:17:20.391 ], 00:17:20.391 "product_name": "Malloc disk", 00:17:20.391 "block_size": 4096, 00:17:20.391 "num_blocks": 8192, 00:17:20.391 "uuid": "cd8312c4-9d14-4c09-8449-875696dce5a8", 00:17:20.391 "md_size": 32, 00:17:20.391 "md_interleave": false, 00:17:20.391 "dif_type": 0, 00:17:20.391 "assigned_rate_limits": { 00:17:20.391 "rw_ios_per_sec": 0, 00:17:20.391 "rw_mbytes_per_sec": 0, 00:17:20.391 "r_mbytes_per_sec": 0, 00:17:20.391 "w_mbytes_per_sec": 0 00:17:20.391 }, 00:17:20.391 "claimed": true, 00:17:20.391 "claim_type": 
"exclusive_write", 00:17:20.391 "zoned": false, 00:17:20.391 "supported_io_types": { 00:17:20.391 "read": true, 00:17:20.391 "write": true, 00:17:20.391 "unmap": true, 00:17:20.391 "flush": true, 00:17:20.391 "reset": true, 00:17:20.391 "nvme_admin": false, 00:17:20.391 "nvme_io": false, 00:17:20.391 "nvme_io_md": false, 00:17:20.391 "write_zeroes": true, 00:17:20.391 "zcopy": true, 00:17:20.391 "get_zone_info": false, 00:17:20.391 "zone_management": false, 00:17:20.391 "zone_append": false, 00:17:20.391 "compare": false, 00:17:20.391 "compare_and_write": false, 00:17:20.391 "abort": true, 00:17:20.391 "seek_hole": false, 00:17:20.391 "seek_data": false, 00:17:20.391 "copy": true, 00:17:20.391 "nvme_iov_md": false 00:17:20.391 }, 00:17:20.391 "memory_domains": [ 00:17:20.391 { 00:17:20.391 "dma_device_id": "system", 00:17:20.391 "dma_device_type": 1 00:17:20.391 }, 00:17:20.391 { 00:17:20.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.391 "dma_device_type": 2 00:17:20.391 } 00:17:20.391 ], 00:17:20.391 "driver_specific": {} 00:17:20.391 } 00:17:20.391 ] 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.391 
13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.391 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.391 "name": "Existed_Raid", 00:17:20.391 "uuid": "f714679e-0a49-46c4-9594-ff3303ac0301", 00:17:20.391 "strip_size_kb": 0, 00:17:20.391 "state": "online", 00:17:20.391 "raid_level": "raid1", 00:17:20.391 "superblock": true, 00:17:20.391 "num_base_bdevs": 2, 00:17:20.391 "num_base_bdevs_discovered": 2, 00:17:20.391 "num_base_bdevs_operational": 2, 00:17:20.391 
"base_bdevs_list": [ 00:17:20.392 { 00:17:20.392 "name": "BaseBdev1", 00:17:20.392 "uuid": "86ed47c2-fa4d-425a-9cf1-925b51c2b39e", 00:17:20.392 "is_configured": true, 00:17:20.392 "data_offset": 256, 00:17:20.392 "data_size": 7936 00:17:20.392 }, 00:17:20.392 { 00:17:20.392 "name": "BaseBdev2", 00:17:20.392 "uuid": "cd8312c4-9d14-4c09-8449-875696dce5a8", 00:17:20.392 "is_configured": true, 00:17:20.392 "data_offset": 256, 00:17:20.392 "data_size": 7936 00:17:20.392 } 00:17:20.392 ] 00:17:20.392 }' 00:17:20.392 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.392 13:30:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:17:20.959 [2024-11-26 13:30:09.393287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.959 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:20.959 "name": "Existed_Raid", 00:17:20.959 "aliases": [ 00:17:20.959 "f714679e-0a49-46c4-9594-ff3303ac0301" 00:17:20.959 ], 00:17:20.959 "product_name": "Raid Volume", 00:17:20.959 "block_size": 4096, 00:17:20.959 "num_blocks": 7936, 00:17:20.959 "uuid": "f714679e-0a49-46c4-9594-ff3303ac0301", 00:17:20.959 "md_size": 32, 00:17:20.959 "md_interleave": false, 00:17:20.959 "dif_type": 0, 00:17:20.959 "assigned_rate_limits": { 00:17:20.959 "rw_ios_per_sec": 0, 00:17:20.959 "rw_mbytes_per_sec": 0, 00:17:20.959 "r_mbytes_per_sec": 0, 00:17:20.959 "w_mbytes_per_sec": 0 00:17:20.959 }, 00:17:20.959 "claimed": false, 00:17:20.959 "zoned": false, 00:17:20.959 "supported_io_types": { 00:17:20.959 "read": true, 00:17:20.959 "write": true, 00:17:20.959 "unmap": false, 00:17:20.959 "flush": false, 00:17:20.959 "reset": true, 00:17:20.959 "nvme_admin": false, 00:17:20.959 "nvme_io": false, 00:17:20.959 "nvme_io_md": false, 00:17:20.959 "write_zeroes": true, 00:17:20.959 "zcopy": false, 00:17:20.959 "get_zone_info": false, 00:17:20.959 "zone_management": false, 00:17:20.960 "zone_append": false, 00:17:20.960 "compare": false, 00:17:20.960 "compare_and_write": false, 00:17:20.960 "abort": false, 00:17:20.960 "seek_hole": false, 00:17:20.960 "seek_data": false, 00:17:20.960 "copy": false, 00:17:20.960 "nvme_iov_md": false 00:17:20.960 }, 00:17:20.960 "memory_domains": [ 00:17:20.960 { 00:17:20.960 "dma_device_id": "system", 00:17:20.960 "dma_device_type": 1 00:17:20.960 }, 00:17:20.960 { 00:17:20.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.960 "dma_device_type": 2 00:17:20.960 }, 00:17:20.960 { 
00:17:20.960 "dma_device_id": "system", 00:17:20.960 "dma_device_type": 1 00:17:20.960 }, 00:17:20.960 { 00:17:20.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.960 "dma_device_type": 2 00:17:20.960 } 00:17:20.960 ], 00:17:20.960 "driver_specific": { 00:17:20.960 "raid": { 00:17:20.960 "uuid": "f714679e-0a49-46c4-9594-ff3303ac0301", 00:17:20.960 "strip_size_kb": 0, 00:17:20.960 "state": "online", 00:17:20.960 "raid_level": "raid1", 00:17:20.960 "superblock": true, 00:17:20.960 "num_base_bdevs": 2, 00:17:20.960 "num_base_bdevs_discovered": 2, 00:17:20.960 "num_base_bdevs_operational": 2, 00:17:20.960 "base_bdevs_list": [ 00:17:20.960 { 00:17:20.960 "name": "BaseBdev1", 00:17:20.960 "uuid": "86ed47c2-fa4d-425a-9cf1-925b51c2b39e", 00:17:20.960 "is_configured": true, 00:17:20.960 "data_offset": 256, 00:17:20.960 "data_size": 7936 00:17:20.960 }, 00:17:20.960 { 00:17:20.960 "name": "BaseBdev2", 00:17:20.960 "uuid": "cd8312c4-9d14-4c09-8449-875696dce5a8", 00:17:20.960 "is_configured": true, 00:17:20.960 "data_offset": 256, 00:17:20.960 "data_size": 7936 00:17:20.960 } 00:17:20.960 ] 00:17:20.960 } 00:17:20.960 } 00:17:20.960 }' 00:17:20.960 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.960 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:20.960 BaseBdev2' 00:17:20.960 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.220 [2024-11-26 13:30:09.653077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.220 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.479 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.479 "name": "Existed_Raid", 00:17:21.479 "uuid": "f714679e-0a49-46c4-9594-ff3303ac0301", 00:17:21.479 "strip_size_kb": 0, 00:17:21.479 "state": "online", 00:17:21.479 "raid_level": "raid1", 00:17:21.479 "superblock": true, 00:17:21.479 "num_base_bdevs": 2, 00:17:21.479 "num_base_bdevs_discovered": 1, 00:17:21.479 "num_base_bdevs_operational": 1, 00:17:21.479 "base_bdevs_list": [ 00:17:21.479 { 00:17:21.479 "name": null, 00:17:21.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.479 "is_configured": false, 00:17:21.479 "data_offset": 0, 00:17:21.479 "data_size": 7936 00:17:21.479 }, 00:17:21.479 { 00:17:21.479 "name": "BaseBdev2", 00:17:21.479 "uuid": 
"cd8312c4-9d14-4c09-8449-875696dce5a8", 00:17:21.479 "is_configured": true, 00:17:21.479 "data_offset": 256, 00:17:21.479 "data_size": 7936 00:17:21.479 } 00:17:21.479 ] 00:17:21.479 }' 00:17:21.479 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.479 13:30:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.738 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.738 [2024-11-26 13:30:10.303489] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:21.738 [2024-11-26 13:30:10.303639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.998 [2024-11-26 13:30:10.373623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.998 [2024-11-26 13:30:10.373674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.998 [2024-11-26 13:30:10.373691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:21.998 13:30:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86849 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86849 ']' 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86849 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86849 00:17:21.998 killing process with pid 86849 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86849' 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86849 00:17:21.998 [2024-11-26 13:30:10.462854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.998 13:30:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86849 00:17:21.998 [2024-11-26 13:30:10.474577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.937 ************************************ 00:17:22.937 END TEST raid_state_function_test_sb_md_separate 00:17:22.937 ************************************ 00:17:22.937 13:30:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:22.937 00:17:22.937 real 0m5.235s 00:17:22.937 user 0m8.062s 
00:17:22.937 sys 0m0.781s 00:17:22.937 13:30:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.937 13:30:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.937 13:30:11 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:22.937 13:30:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:22.937 13:30:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.937 13:30:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.937 ************************************ 00:17:22.937 START TEST raid_superblock_test_md_separate 00:17:22.937 ************************************ 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87102 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87102 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87102 ']' 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.937 13:30:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.937 [2024-11-26 13:30:11.482691] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:17:22.937 [2024-11-26 13:30:11.482888] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87102 ] 00:17:23.195 [2024-11-26 13:30:11.665320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.454 [2024-11-26 13:30:11.766214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.454 [2024-11-26 13:30:11.933578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.454 [2024-11-26 13:30:11.933635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:24.022 13:30:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.022 malloc1 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.022 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.022 [2024-11-26 13:30:12.479387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.023 [2024-11-26 13:30:12.479863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.023 [2024-11-26 13:30:12.480019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:24.023 [2024-11-26 13:30:12.480217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.023 [2024-11-26 13:30:12.482590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.023 [2024-11-26 13:30:12.482765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:24.023 pt1 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.023 malloc2 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.023 13:30:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.023 [2024-11-26 13:30:12.526046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:24.023 [2024-11-26 13:30:12.526272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.023 [2024-11-26 13:30:12.526308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:24.023 [2024-11-26 13:30:12.526322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.023 [2024-11-26 13:30:12.528568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.023 [2024-11-26 13:30:12.528607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:24.023 pt2 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.023 [2024-11-26 13:30:12.534082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.023 [2024-11-26 13:30:12.536190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.023 [2024-11-26 13:30:12.536438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:24.023 [2024-11-26 13:30:12.536459] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.023 [2024-11-26 13:30:12.536547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:24.023 [2024-11-26 13:30:12.536703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:24.023 [2024-11-26 13:30:12.536720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:24.023 [2024-11-26 13:30:12.536840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.023 13:30:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.023 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.282 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.282 "name": "raid_bdev1", 00:17:24.282 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:24.282 "strip_size_kb": 0, 00:17:24.282 "state": "online", 00:17:24.282 "raid_level": "raid1", 00:17:24.282 "superblock": true, 00:17:24.282 "num_base_bdevs": 2, 00:17:24.282 "num_base_bdevs_discovered": 2, 00:17:24.282 "num_base_bdevs_operational": 2, 00:17:24.282 "base_bdevs_list": [ 00:17:24.282 { 00:17:24.282 "name": "pt1", 00:17:24.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.282 "is_configured": true, 00:17:24.282 "data_offset": 256, 00:17:24.282 "data_size": 7936 00:17:24.282 }, 00:17:24.282 { 00:17:24.282 "name": "pt2", 00:17:24.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.282 "is_configured": true, 00:17:24.282 "data_offset": 256, 00:17:24.282 "data_size": 7936 00:17:24.282 } 00:17:24.282 ] 00:17:24.282 }' 00:17:24.282 13:30:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.282 13:30:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.542 [2024-11-26 13:30:13.074456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.542 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.801 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:24.801 "name": "raid_bdev1", 00:17:24.801 "aliases": [ 00:17:24.801 "eea31e4a-c723-4055-ad29-c6d76ac3aaea" 00:17:24.801 ], 00:17:24.801 "product_name": "Raid Volume", 00:17:24.801 "block_size": 4096, 00:17:24.801 "num_blocks": 7936, 00:17:24.801 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:24.801 "md_size": 32, 00:17:24.801 "md_interleave": false, 00:17:24.801 "dif_type": 0, 00:17:24.801 "assigned_rate_limits": { 00:17:24.801 "rw_ios_per_sec": 0, 00:17:24.801 "rw_mbytes_per_sec": 0, 00:17:24.801 "r_mbytes_per_sec": 0, 00:17:24.801 "w_mbytes_per_sec": 0 00:17:24.801 }, 00:17:24.801 "claimed": false, 00:17:24.801 "zoned": false, 
00:17:24.801 "supported_io_types": { 00:17:24.801 "read": true, 00:17:24.801 "write": true, 00:17:24.801 "unmap": false, 00:17:24.801 "flush": false, 00:17:24.801 "reset": true, 00:17:24.801 "nvme_admin": false, 00:17:24.801 "nvme_io": false, 00:17:24.801 "nvme_io_md": false, 00:17:24.801 "write_zeroes": true, 00:17:24.801 "zcopy": false, 00:17:24.801 "get_zone_info": false, 00:17:24.801 "zone_management": false, 00:17:24.801 "zone_append": false, 00:17:24.801 "compare": false, 00:17:24.801 "compare_and_write": false, 00:17:24.801 "abort": false, 00:17:24.801 "seek_hole": false, 00:17:24.801 "seek_data": false, 00:17:24.801 "copy": false, 00:17:24.801 "nvme_iov_md": false 00:17:24.801 }, 00:17:24.801 "memory_domains": [ 00:17:24.801 { 00:17:24.801 "dma_device_id": "system", 00:17:24.801 "dma_device_type": 1 00:17:24.801 }, 00:17:24.801 { 00:17:24.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.801 "dma_device_type": 2 00:17:24.801 }, 00:17:24.801 { 00:17:24.801 "dma_device_id": "system", 00:17:24.801 "dma_device_type": 1 00:17:24.801 }, 00:17:24.801 { 00:17:24.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.802 "dma_device_type": 2 00:17:24.802 } 00:17:24.802 ], 00:17:24.802 "driver_specific": { 00:17:24.802 "raid": { 00:17:24.802 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:24.802 "strip_size_kb": 0, 00:17:24.802 "state": "online", 00:17:24.802 "raid_level": "raid1", 00:17:24.802 "superblock": true, 00:17:24.802 "num_base_bdevs": 2, 00:17:24.802 "num_base_bdevs_discovered": 2, 00:17:24.802 "num_base_bdevs_operational": 2, 00:17:24.802 "base_bdevs_list": [ 00:17:24.802 { 00:17:24.802 "name": "pt1", 00:17:24.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.802 "is_configured": true, 00:17:24.802 "data_offset": 256, 00:17:24.802 "data_size": 7936 00:17:24.802 }, 00:17:24.802 { 00:17:24.802 "name": "pt2", 00:17:24.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.802 "is_configured": true, 00:17:24.802 "data_offset": 256, 
00:17:24.802 "data_size": 7936 00:17:24.802 } 00:17:24.802 ] 00:17:24.802 } 00:17:24.802 } 00:17:24.802 }' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:24.802 pt2' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:24.802 [2024-11-26 13:30:13.338480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.802 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eea31e4a-c723-4055-ad29-c6d76ac3aaea 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z eea31e4a-c723-4055-ad29-c6d76ac3aaea ']' 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 [2024-11-26 13:30:13.390187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.062 [2024-11-26 13:30:13.390356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.062 [2024-11-26 13:30:13.390456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.062 [2024-11-26 13:30:13.390517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.062 [2024-11-26 13:30:13.390534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:25.062 13:30:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 [2024-11-26 13:30:13.526216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:25.062 [2024-11-26 13:30:13.528432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:25.062 [2024-11-26 13:30:13.528645] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:25.062 [2024-11-26 13:30:13.528861] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:25.062 [2024-11-26 13:30:13.529111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.062 [2024-11-26 13:30:13.529266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:25.062 request: 00:17:25.062 { 00:17:25.062 "name": 
"raid_bdev1", 00:17:25.062 "raid_level": "raid1", 00:17:25.062 "base_bdevs": [ 00:17:25.062 "malloc1", 00:17:25.062 "malloc2" 00:17:25.062 ], 00:17:25.062 "superblock": false, 00:17:25.062 "method": "bdev_raid_create", 00:17:25.062 "req_id": 1 00:17:25.062 } 00:17:25.062 Got JSON-RPC error response 00:17:25.062 response: 00:17:25.062 { 00:17:25.062 "code": -17, 00:17:25.062 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:25.062 } 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.062 [2024-11-26 13:30:13.594220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:25.062 [2024-11-26 13:30:13.594423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.062 [2024-11-26 13:30:13.594481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:25.062 [2024-11-26 13:30:13.594641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.062 [2024-11-26 13:30:13.596944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.062 [2024-11-26 13:30:13.597117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:25.062 [2024-11-26 13:30:13.597294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:25.062 [2024-11-26 13:30:13.597399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:25.062 pt1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:25.062 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.063 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.322 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.322 "name": "raid_bdev1", 00:17:25.322 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:25.322 "strip_size_kb": 0, 00:17:25.322 "state": "configuring", 00:17:25.322 "raid_level": "raid1", 00:17:25.322 "superblock": true, 00:17:25.322 "num_base_bdevs": 2, 00:17:25.322 "num_base_bdevs_discovered": 1, 00:17:25.322 "num_base_bdevs_operational": 2, 00:17:25.322 "base_bdevs_list": [ 00:17:25.322 { 00:17:25.322 "name": "pt1", 00:17:25.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.322 "is_configured": true, 00:17:25.322 "data_offset": 256, 00:17:25.322 "data_size": 7936 00:17:25.322 }, 00:17:25.322 { 00:17:25.322 "name": null, 00:17:25.322 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.322 "is_configured": false, 00:17:25.322 "data_offset": 256, 00:17:25.322 "data_size": 7936 00:17:25.322 } 00:17:25.322 ] 00:17:25.322 }' 00:17:25.322 13:30:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.322 13:30:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.581 [2024-11-26 13:30:14.130321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.581 [2024-11-26 13:30:14.130399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.581 [2024-11-26 13:30:14.130421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:25.581 [2024-11-26 13:30:14.130434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.581 [2024-11-26 13:30:14.130623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.581 [2024-11-26 13:30:14.130648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.581 [2024-11-26 13:30:14.130689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:25.581 [2024-11-26 13:30:14.130714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.581 [2024-11-26 13:30:14.130813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:25.581 [2024-11-26 13:30:14.130830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.581 [2024-11-26 13:30:14.130926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:25.581 [2024-11-26 13:30:14.131046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:25.581 [2024-11-26 13:30:14.131058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:25.581 [2024-11-26 13:30:14.131150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.581 pt2 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.581 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.582 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.840 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.840 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.840 "name": "raid_bdev1", 00:17:25.840 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:25.840 "strip_size_kb": 0, 00:17:25.840 "state": "online", 00:17:25.840 "raid_level": "raid1", 00:17:25.841 "superblock": true, 00:17:25.841 "num_base_bdevs": 2, 00:17:25.841 "num_base_bdevs_discovered": 2, 00:17:25.841 "num_base_bdevs_operational": 2, 00:17:25.841 "base_bdevs_list": [ 00:17:25.841 { 00:17:25.841 "name": "pt1", 00:17:25.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.841 "is_configured": true, 00:17:25.841 "data_offset": 256, 00:17:25.841 "data_size": 7936 00:17:25.841 }, 00:17:25.841 { 00:17:25.841 "name": "pt2", 00:17:25.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.841 "is_configured": true, 00:17:25.841 "data_offset": 256, 
00:17:25.841 "data_size": 7936 00:17:25.841 } 00:17:25.841 ] 00:17:25.841 }' 00:17:25.841 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.841 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.099 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 [2024-11-26 13:30:14.666657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:26.359 "name": "raid_bdev1", 00:17:26.359 "aliases": [ 00:17:26.359 "eea31e4a-c723-4055-ad29-c6d76ac3aaea" 00:17:26.359 ], 00:17:26.359 "product_name": 
"Raid Volume", 00:17:26.359 "block_size": 4096, 00:17:26.359 "num_blocks": 7936, 00:17:26.359 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:26.359 "md_size": 32, 00:17:26.359 "md_interleave": false, 00:17:26.359 "dif_type": 0, 00:17:26.359 "assigned_rate_limits": { 00:17:26.359 "rw_ios_per_sec": 0, 00:17:26.359 "rw_mbytes_per_sec": 0, 00:17:26.359 "r_mbytes_per_sec": 0, 00:17:26.359 "w_mbytes_per_sec": 0 00:17:26.359 }, 00:17:26.359 "claimed": false, 00:17:26.359 "zoned": false, 00:17:26.359 "supported_io_types": { 00:17:26.359 "read": true, 00:17:26.359 "write": true, 00:17:26.359 "unmap": false, 00:17:26.359 "flush": false, 00:17:26.359 "reset": true, 00:17:26.359 "nvme_admin": false, 00:17:26.359 "nvme_io": false, 00:17:26.359 "nvme_io_md": false, 00:17:26.359 "write_zeroes": true, 00:17:26.359 "zcopy": false, 00:17:26.359 "get_zone_info": false, 00:17:26.359 "zone_management": false, 00:17:26.359 "zone_append": false, 00:17:26.359 "compare": false, 00:17:26.359 "compare_and_write": false, 00:17:26.359 "abort": false, 00:17:26.359 "seek_hole": false, 00:17:26.359 "seek_data": false, 00:17:26.359 "copy": false, 00:17:26.359 "nvme_iov_md": false 00:17:26.359 }, 00:17:26.359 "memory_domains": [ 00:17:26.359 { 00:17:26.359 "dma_device_id": "system", 00:17:26.359 "dma_device_type": 1 00:17:26.359 }, 00:17:26.359 { 00:17:26.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.359 "dma_device_type": 2 00:17:26.359 }, 00:17:26.359 { 00:17:26.359 "dma_device_id": "system", 00:17:26.359 "dma_device_type": 1 00:17:26.359 }, 00:17:26.359 { 00:17:26.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.359 "dma_device_type": 2 00:17:26.359 } 00:17:26.359 ], 00:17:26.359 "driver_specific": { 00:17:26.359 "raid": { 00:17:26.359 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:26.359 "strip_size_kb": 0, 00:17:26.359 "state": "online", 00:17:26.359 "raid_level": "raid1", 00:17:26.359 "superblock": true, 00:17:26.359 "num_base_bdevs": 2, 00:17:26.359 
"num_base_bdevs_discovered": 2, 00:17:26.359 "num_base_bdevs_operational": 2, 00:17:26.359 "base_bdevs_list": [ 00:17:26.359 { 00:17:26.359 "name": "pt1", 00:17:26.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:26.359 "is_configured": true, 00:17:26.359 "data_offset": 256, 00:17:26.359 "data_size": 7936 00:17:26.359 }, 00:17:26.359 { 00:17:26.359 "name": "pt2", 00:17:26.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.359 "is_configured": true, 00:17:26.359 "data_offset": 256, 00:17:26.359 "data_size": 7936 00:17:26.359 } 00:17:26.359 ] 00:17:26.359 } 00:17:26.359 } 00:17:26.359 }' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:26.359 pt2' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.359 
13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.359 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.619 [2024-11-26 13:30:14.938759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' eea31e4a-c723-4055-ad29-c6d76ac3aaea '!=' eea31e4a-c723-4055-ad29-c6d76ac3aaea ']' 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.619 [2024-11-26 13:30:14.986525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.619 13:30:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.619 13:30:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.619 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.619 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.619 "name": "raid_bdev1", 00:17:26.619 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:26.619 "strip_size_kb": 0, 00:17:26.619 "state": "online", 00:17:26.619 "raid_level": "raid1", 00:17:26.619 "superblock": true, 00:17:26.619 "num_base_bdevs": 2, 00:17:26.619 "num_base_bdevs_discovered": 1, 00:17:26.619 "num_base_bdevs_operational": 1, 00:17:26.619 "base_bdevs_list": [ 00:17:26.619 { 00:17:26.619 "name": null, 00:17:26.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.619 "is_configured": false, 00:17:26.619 "data_offset": 0, 00:17:26.619 "data_size": 7936 00:17:26.619 }, 00:17:26.619 { 00:17:26.619 "name": "pt2", 00:17:26.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.619 "is_configured": true, 00:17:26.619 "data_offset": 256, 00:17:26.619 "data_size": 7936 00:17:26.619 } 00:17:26.619 ] 00:17:26.619 }' 00:17:26.619 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:26.619 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.188 [2024-11-26 13:30:15.506624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.188 [2024-11-26 13:30:15.506648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.188 [2024-11-26 13:30:15.506698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.188 [2024-11-26 13:30:15.506743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.188 [2024-11-26 13:30:15.506759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:27.188 13:30:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.188 [2024-11-26 13:30:15.578631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:27.188 [2024-11-26 13:30:15.578837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.188 
[2024-11-26 13:30:15.578888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:27.188 [2024-11-26 13:30:15.578904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.188 [2024-11-26 13:30:15.581106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.188 [2024-11-26 13:30:15.581153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:27.188 [2024-11-26 13:30:15.581198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:27.188 [2024-11-26 13:30:15.581270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.188 [2024-11-26 13:30:15.581362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:27.188 [2024-11-26 13:30:15.581381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.188 [2024-11-26 13:30:15.581447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:27.188 [2024-11-26 13:30:15.581562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:27.188 [2024-11-26 13:30:15.581574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:27.188 [2024-11-26 13:30:15.581679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.188 pt2 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.188 "name": "raid_bdev1", 00:17:27.188 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:27.188 "strip_size_kb": 0, 00:17:27.188 "state": "online", 00:17:27.188 "raid_level": "raid1", 00:17:27.188 "superblock": true, 00:17:27.188 "num_base_bdevs": 2, 00:17:27.188 "num_base_bdevs_discovered": 1, 00:17:27.188 "num_base_bdevs_operational": 1, 00:17:27.188 "base_bdevs_list": [ 00:17:27.188 { 00:17:27.188 
"name": null, 00:17:27.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.188 "is_configured": false, 00:17:27.188 "data_offset": 256, 00:17:27.188 "data_size": 7936 00:17:27.188 }, 00:17:27.188 { 00:17:27.188 "name": "pt2", 00:17:27.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.188 "is_configured": true, 00:17:27.188 "data_offset": 256, 00:17:27.188 "data_size": 7936 00:17:27.188 } 00:17:27.188 ] 00:17:27.188 }' 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.188 13:30:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 [2024-11-26 13:30:16.110725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.757 [2024-11-26 13:30:16.110751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.757 [2024-11-26 13:30:16.110796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.757 [2024-11-26 13:30:16.110839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.757 [2024-11-26 13:30:16.110851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 [2024-11-26 13:30:16.174776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.757 [2024-11-26 13:30:16.174995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.757 [2024-11-26 13:30:16.175030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:27.757 [2024-11-26 13:30:16.175044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.757 [2024-11-26 13:30:16.177353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.757 [2024-11-26 13:30:16.177392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.757 [2024-11-26 13:30:16.177461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:27.757 
[2024-11-26 13:30:16.177504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.757 [2024-11-26 13:30:16.177646] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:27.757 [2024-11-26 13:30:16.177665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.757 [2024-11-26 13:30:16.177683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:27.757 [2024-11-26 13:30:16.177743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.757 [2024-11-26 13:30:16.177818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:27.757 [2024-11-26 13:30:16.177831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.757 [2024-11-26 13:30:16.177904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:27.757 [2024-11-26 13:30:16.178029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:27.757 [2024-11-26 13:30:16.178090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:27.757 [2024-11-26 13:30:16.178205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.757 pt1 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.757 13:30:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.758 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.758 "name": "raid_bdev1", 00:17:27.758 "uuid": "eea31e4a-c723-4055-ad29-c6d76ac3aaea", 00:17:27.758 "strip_size_kb": 0, 00:17:27.758 "state": "online", 00:17:27.758 "raid_level": "raid1", 00:17:27.758 "superblock": true, 00:17:27.758 "num_base_bdevs": 2, 00:17:27.758 "num_base_bdevs_discovered": 1, 00:17:27.758 
"num_base_bdevs_operational": 1, 00:17:27.758 "base_bdevs_list": [ 00:17:27.758 { 00:17:27.758 "name": null, 00:17:27.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.758 "is_configured": false, 00:17:27.758 "data_offset": 256, 00:17:27.758 "data_size": 7936 00:17:27.758 }, 00:17:27.758 { 00:17:27.758 "name": "pt2", 00:17:27.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.758 "is_configured": true, 00:17:27.758 "data_offset": 256, 00:17:27.758 "data_size": 7936 00:17:27.758 } 00:17:27.758 ] 00:17:27.758 }' 00:17:27.758 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.758 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.326 [2024-11-26 
13:30:16.755235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' eea31e4a-c723-4055-ad29-c6d76ac3aaea '!=' eea31e4a-c723-4055-ad29-c6d76ac3aaea ']' 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87102 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87102 ']' 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87102 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87102 00:17:28.326 killing process with pid 87102 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87102' 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87102 00:17:28.326 [2024-11-26 13:30:16.833350] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.326 [2024-11-26 13:30:16.833410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.326 [2024-11-26 13:30:16.833450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:28.326 [2024-11-26 13:30:16.833468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:28.326 13:30:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87102 00:17:28.585 [2024-11-26 13:30:16.983446] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.523 13:30:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:29.523 00:17:29.523 real 0m6.446s 00:17:29.523 user 0m10.409s 00:17:29.523 sys 0m0.934s 00:17:29.523 13:30:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.523 ************************************ 00:17:29.523 END TEST raid_superblock_test_md_separate 00:17:29.523 ************************************ 00:17:29.523 13:30:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.523 13:30:17 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:29.523 13:30:17 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:29.523 13:30:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:29.523 13:30:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.523 13:30:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.523 ************************************ 00:17:29.523 START TEST raid_rebuild_test_sb_md_separate 00:17:29.523 ************************************ 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:29.523 
13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87431 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87431 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87431 ']' 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.523 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.523 [2024-11-26 13:30:18.003139] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:17:29.523 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:29.523 Zero copy mechanism will not be used. 00:17:29.523 [2024-11-26 13:30:18.003663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87431 ] 00:17:29.783 [2024-11-26 13:30:18.183859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.783 [2024-11-26 13:30:18.283122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.042 [2024-11-26 13:30:18.451295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.042 [2024-11-26 13:30:18.451342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.611 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.611 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:30.611 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.611 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:30.611 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.611 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.611 BaseBdev1_malloc 
00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.611 [2024-11-26 13:30:19.021377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.611 [2024-11-26 13:30:19.021467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.611 [2024-11-26 13:30:19.021496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:30.611 [2024-11-26 13:30:19.021512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.611 [2024-11-26 13:30:19.023792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.611 [2024-11-26 13:30:19.023835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.611 BaseBdev1 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.611 BaseBdev2_malloc 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.611 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.612 [2024-11-26 13:30:19.063948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:30.612 [2024-11-26 13:30:19.064029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.612 [2024-11-26 13:30:19.064053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:30.612 [2024-11-26 13:30:19.064070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.612 [2024-11-26 13:30:19.066192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.612 [2024-11-26 13:30:19.066248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:30.612 BaseBdev2 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.612 spare_malloc 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.612 spare_delay 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.612 [2024-11-26 13:30:19.127576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:30.612 [2024-11-26 13:30:19.127644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.612 [2024-11-26 13:30:19.127670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:30.612 [2024-11-26 13:30:19.127685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.612 [2024-11-26 13:30:19.129848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.612 [2024-11-26 13:30:19.129895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:30.612 spare 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.612 [2024-11-26 13:30:19.139625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.612 [2024-11-26 13:30:19.141573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.612 [2024-11-26 13:30:19.141769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:30.612 [2024-11-26 13:30:19.141789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.612 [2024-11-26 13:30:19.141866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:30.612 [2024-11-26 13:30:19.142009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:30.612 [2024-11-26 13:30:19.142023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:30.612 [2024-11-26 13:30:19.142132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.612 13:30:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.612 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.870 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.870 "name": "raid_bdev1", 00:17:30.870 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:30.870 "strip_size_kb": 0, 00:17:30.870 "state": "online", 00:17:30.870 "raid_level": "raid1", 00:17:30.870 "superblock": true, 00:17:30.870 "num_base_bdevs": 2, 00:17:30.870 "num_base_bdevs_discovered": 2, 00:17:30.870 "num_base_bdevs_operational": 2, 00:17:30.870 "base_bdevs_list": [ 00:17:30.870 { 00:17:30.870 "name": "BaseBdev1", 00:17:30.870 "uuid": "5ec39893-1b1b-515c-a57d-1966103a2d01", 00:17:30.870 "is_configured": true, 00:17:30.870 "data_offset": 256, 00:17:30.870 "data_size": 7936 00:17:30.870 }, 00:17:30.870 { 00:17:30.870 "name": "BaseBdev2", 00:17:30.870 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:30.870 "is_configured": true, 00:17:30.870 "data_offset": 256, 00:17:30.870 "data_size": 7936 
00:17:30.870 } 00:17:30.870 ] 00:17:30.870 }' 00:17:30.870 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.870 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.128 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.128 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.128 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.128 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:31.128 [2024-11-26 13:30:19.635958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.128 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.129 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:31.129 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.129 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.129 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.129 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.387 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:31.646 [2024-11-26 13:30:19.999811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:31.646 /dev/nbd0 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.646 1+0 records in 00:17:31.646 1+0 records out 00:17:31.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236852 s, 17.3 MB/s 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.646 13:30:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:31.646 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:32.581 7936+0 records in 00:17:32.581 7936+0 records out 00:17:32.581 32505856 bytes (33 MB, 31 MiB) copied, 0.756064 s, 43.0 MB/s 00:17:32.581 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:32.581 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.581 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.581 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.582 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:32.582 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.582 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.582 [2024-11-26 
13:30:21.095573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.582 [2024-11-26 13:30:21.103654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.582 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.840 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.840 "name": "raid_bdev1", 00:17:32.840 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:32.840 "strip_size_kb": 0, 00:17:32.840 "state": "online", 00:17:32.840 "raid_level": "raid1", 00:17:32.840 "superblock": true, 00:17:32.840 "num_base_bdevs": 2, 00:17:32.840 "num_base_bdevs_discovered": 1, 00:17:32.840 "num_base_bdevs_operational": 1, 00:17:32.840 "base_bdevs_list": [ 00:17:32.840 { 00:17:32.840 "name": null, 00:17:32.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.840 "is_configured": false, 00:17:32.840 "data_offset": 0, 00:17:32.840 "data_size": 7936 00:17:32.840 }, 00:17:32.840 { 00:17:32.840 "name": "BaseBdev2", 00:17:32.840 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:32.841 "is_configured": true, 00:17:32.841 "data_offset": 256, 00:17:32.841 "data_size": 7936 00:17:32.841 } 00:17:32.841 ] 00:17:32.841 }' 00:17:32.841 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.841 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.099 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:33.099 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.099 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.099 [2024-11-26 13:30:21.579739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.099 [2024-11-26 13:30:21.590492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:33.099 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.099 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:33.099 [2024-11-26 13:30:21.592568] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.036 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.296 "name": "raid_bdev1", 00:17:34.296 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:34.296 "strip_size_kb": 0, 00:17:34.296 "state": "online", 00:17:34.296 "raid_level": "raid1", 00:17:34.296 "superblock": true, 00:17:34.296 "num_base_bdevs": 2, 00:17:34.296 "num_base_bdevs_discovered": 2, 00:17:34.296 "num_base_bdevs_operational": 2, 00:17:34.296 "process": { 00:17:34.296 "type": "rebuild", 00:17:34.296 "target": "spare", 00:17:34.296 "progress": { 00:17:34.296 "blocks": 2560, 00:17:34.296 "percent": 32 00:17:34.296 } 00:17:34.296 }, 00:17:34.296 "base_bdevs_list": [ 00:17:34.296 { 00:17:34.296 "name": "spare", 00:17:34.296 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:34.296 "is_configured": true, 00:17:34.296 "data_offset": 256, 00:17:34.296 "data_size": 7936 00:17:34.296 }, 00:17:34.296 { 00:17:34.296 "name": "BaseBdev2", 00:17:34.296 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:34.296 "is_configured": true, 00:17:34.296 "data_offset": 256, 00:17:34.296 "data_size": 7936 00:17:34.296 } 00:17:34.296 ] 00:17:34.296 }' 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.296 13:30:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.296 [2024-11-26 13:30:22.758674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.296 [2024-11-26 13:30:22.799785] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:34.296 [2024-11-26 13:30:22.799851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.296 [2024-11-26 13:30:22.799871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.296 [2024-11-26 13:30:22.799882] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.296 13:30:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.296 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.555 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.555 "name": "raid_bdev1", 00:17:34.555 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:34.555 "strip_size_kb": 0, 00:17:34.555 "state": "online", 00:17:34.555 "raid_level": "raid1", 00:17:34.555 "superblock": true, 00:17:34.555 "num_base_bdevs": 2, 00:17:34.555 "num_base_bdevs_discovered": 1, 00:17:34.555 "num_base_bdevs_operational": 1, 00:17:34.555 "base_bdevs_list": [ 00:17:34.555 { 00:17:34.555 "name": null, 00:17:34.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.555 "is_configured": false, 00:17:34.555 "data_offset": 0, 00:17:34.555 "data_size": 7936 00:17:34.555 }, 00:17:34.555 { 00:17:34.555 "name": "BaseBdev2", 00:17:34.555 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:34.555 "is_configured": true, 00:17:34.555 "data_offset": 256, 00:17:34.555 "data_size": 7936 00:17:34.555 } 00:17:34.555 ] 00:17:34.555 }' 00:17:34.555 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.555 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.814 "name": "raid_bdev1", 00:17:34.814 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:34.814 "strip_size_kb": 0, 00:17:34.814 "state": "online", 00:17:34.814 "raid_level": "raid1", 00:17:34.814 "superblock": true, 00:17:34.814 "num_base_bdevs": 2, 00:17:34.814 "num_base_bdevs_discovered": 1, 00:17:34.814 "num_base_bdevs_operational": 1, 00:17:34.814 "base_bdevs_list": [ 00:17:34.814 { 00:17:34.814 "name": null, 00:17:34.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.814 
"is_configured": false, 00:17:34.814 "data_offset": 0, 00:17:34.814 "data_size": 7936 00:17:34.814 }, 00:17:34.814 { 00:17:34.814 "name": "BaseBdev2", 00:17:34.814 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:34.814 "is_configured": true, 00:17:34.814 "data_offset": 256, 00:17:34.814 "data_size": 7936 00:17:34.814 } 00:17:34.814 ] 00:17:34.814 }' 00:17:34.814 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.073 [2024-11-26 13:30:23.437946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.073 [2024-11-26 13:30:23.447391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.073 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:35.073 [2024-11-26 13:30:23.449464] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.009 13:30:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.009 "name": "raid_bdev1", 00:17:36.009 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:36.009 "strip_size_kb": 0, 00:17:36.009 "state": "online", 00:17:36.009 "raid_level": "raid1", 00:17:36.009 "superblock": true, 00:17:36.009 "num_base_bdevs": 2, 00:17:36.009 "num_base_bdevs_discovered": 2, 00:17:36.009 "num_base_bdevs_operational": 2, 00:17:36.009 "process": { 00:17:36.009 "type": "rebuild", 00:17:36.009 "target": "spare", 00:17:36.009 "progress": { 00:17:36.009 "blocks": 2560, 00:17:36.009 "percent": 32 00:17:36.009 } 00:17:36.009 }, 00:17:36.009 "base_bdevs_list": [ 00:17:36.009 { 00:17:36.009 "name": "spare", 00:17:36.009 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:36.009 "is_configured": true, 00:17:36.009 "data_offset": 256, 00:17:36.009 "data_size": 7936 00:17:36.009 }, 
00:17:36.009 { 00:17:36.009 "name": "BaseBdev2", 00:17:36.009 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:36.009 "is_configured": true, 00:17:36.009 "data_offset": 256, 00:17:36.009 "data_size": 7936 00:17:36.009 } 00:17:36.009 ] 00:17:36.009 }' 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.009 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:36.269 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=726 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.269 13:30:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.269 "name": "raid_bdev1", 00:17:36.269 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:36.269 "strip_size_kb": 0, 00:17:36.269 "state": "online", 00:17:36.269 "raid_level": "raid1", 00:17:36.269 "superblock": true, 00:17:36.269 "num_base_bdevs": 2, 00:17:36.269 "num_base_bdevs_discovered": 2, 00:17:36.269 "num_base_bdevs_operational": 2, 00:17:36.269 "process": { 00:17:36.269 "type": "rebuild", 00:17:36.269 "target": "spare", 00:17:36.269 "progress": { 00:17:36.269 "blocks": 2816, 00:17:36.269 "percent": 35 00:17:36.269 } 00:17:36.269 }, 00:17:36.269 "base_bdevs_list": [ 00:17:36.269 { 00:17:36.269 "name": "spare", 00:17:36.269 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:36.269 "is_configured": true, 00:17:36.269 "data_offset": 256, 00:17:36.269 "data_size": 7936 00:17:36.269 }, 00:17:36.269 { 00:17:36.269 "name": "BaseBdev2", 00:17:36.269 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:36.269 
"is_configured": true, 00:17:36.269 "data_offset": 256, 00:17:36.269 "data_size": 7936 00:17:36.269 } 00:17:36.269 ] 00:17:36.269 }' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.269 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.647 13:30:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.647 "name": "raid_bdev1", 00:17:37.647 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:37.647 "strip_size_kb": 0, 00:17:37.647 "state": "online", 00:17:37.647 "raid_level": "raid1", 00:17:37.647 "superblock": true, 00:17:37.647 "num_base_bdevs": 2, 00:17:37.647 "num_base_bdevs_discovered": 2, 00:17:37.647 "num_base_bdevs_operational": 2, 00:17:37.647 "process": { 00:17:37.647 "type": "rebuild", 00:17:37.647 "target": "spare", 00:17:37.647 "progress": { 00:17:37.647 "blocks": 5888, 00:17:37.647 "percent": 74 00:17:37.647 } 00:17:37.647 }, 00:17:37.647 "base_bdevs_list": [ 00:17:37.647 { 00:17:37.647 "name": "spare", 00:17:37.647 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:37.647 "is_configured": true, 00:17:37.647 "data_offset": 256, 00:17:37.647 "data_size": 7936 00:17:37.647 }, 00:17:37.647 { 00:17:37.647 "name": "BaseBdev2", 00:17:37.647 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:37.647 "is_configured": true, 00:17:37.647 "data_offset": 256, 00:17:37.647 "data_size": 7936 00:17:37.647 } 00:17:37.647 ] 00:17:37.647 }' 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.647 13:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.215 [2024-11-26 13:30:26.565612] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:38.215 [2024-11-26 13:30:26.565696] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:38.215 [2024-11-26 13:30:26.565803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.474 13:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.474 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.474 "name": "raid_bdev1", 00:17:38.474 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:38.474 "strip_size_kb": 0, 00:17:38.474 "state": "online", 00:17:38.474 "raid_level": "raid1", 00:17:38.474 "superblock": true, 00:17:38.474 
"num_base_bdevs": 2, 00:17:38.474 "num_base_bdevs_discovered": 2, 00:17:38.474 "num_base_bdevs_operational": 2, 00:17:38.474 "base_bdevs_list": [ 00:17:38.474 { 00:17:38.474 "name": "spare", 00:17:38.474 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:38.474 "is_configured": true, 00:17:38.474 "data_offset": 256, 00:17:38.474 "data_size": 7936 00:17:38.474 }, 00:17:38.474 { 00:17:38.474 "name": "BaseBdev2", 00:17:38.474 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:38.474 "is_configured": true, 00:17:38.474 "data_offset": 256, 00:17:38.474 "data_size": 7936 00:17:38.474 } 00:17:38.474 ] 00:17:38.474 }' 00:17:38.474 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.733 13:30:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.733 "name": "raid_bdev1", 00:17:38.733 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:38.733 "strip_size_kb": 0, 00:17:38.733 "state": "online", 00:17:38.733 "raid_level": "raid1", 00:17:38.733 "superblock": true, 00:17:38.733 "num_base_bdevs": 2, 00:17:38.733 "num_base_bdevs_discovered": 2, 00:17:38.733 "num_base_bdevs_operational": 2, 00:17:38.733 "base_bdevs_list": [ 00:17:38.733 { 00:17:38.733 "name": "spare", 00:17:38.733 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:38.733 "is_configured": true, 00:17:38.733 "data_offset": 256, 00:17:38.733 "data_size": 7936 00:17:38.733 }, 00:17:38.733 { 00:17:38.733 "name": "BaseBdev2", 00:17:38.733 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:38.733 "is_configured": true, 00:17:38.733 "data_offset": 256, 00:17:38.733 "data_size": 7936 00:17:38.733 } 00:17:38.733 ] 00:17:38.733 }' 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.733 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.992 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.992 "name": "raid_bdev1", 00:17:38.992 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:38.992 
"strip_size_kb": 0, 00:17:38.992 "state": "online", 00:17:38.992 "raid_level": "raid1", 00:17:38.992 "superblock": true, 00:17:38.992 "num_base_bdevs": 2, 00:17:38.992 "num_base_bdevs_discovered": 2, 00:17:38.992 "num_base_bdevs_operational": 2, 00:17:38.992 "base_bdevs_list": [ 00:17:38.992 { 00:17:38.992 "name": "spare", 00:17:38.992 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:38.992 "is_configured": true, 00:17:38.992 "data_offset": 256, 00:17:38.992 "data_size": 7936 00:17:38.992 }, 00:17:38.992 { 00:17:38.992 "name": "BaseBdev2", 00:17:38.992 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:38.992 "is_configured": true, 00:17:38.992 "data_offset": 256, 00:17:38.992 "data_size": 7936 00:17:38.992 } 00:17:38.992 ] 00:17:38.992 }' 00:17:38.992 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.992 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 [2024-11-26 13:30:27.771734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.251 [2024-11-26 13:30:27.771762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.251 [2024-11-26 13:30:27.771838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.251 [2024-11-26 13:30:27.771903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.251 [2024-11-26 13:30:27.771916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.510 13:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:39.769 /dev/nbd0 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.769 1+0 records in 00:17:39.769 1+0 records out 00:17:39.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567372 s, 7.2 MB/s 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.769 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:40.028 /dev/nbd1 00:17:40.028 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:40.028 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:40.029 1+0 records in 00:17:40.029 1+0 records out 00:17:40.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324243 s, 12.6 MB/s 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.029 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.316 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.603 13:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 [2024-11-26 13:30:29.106302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:40.603 [2024-11-26 13:30:29.106373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.603 [2024-11-26 13:30:29.106404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:40.603 [2024-11-26 13:30:29.106416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:40.603 [2024-11-26 13:30:29.108535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.603 [2024-11-26 13:30:29.108573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:40.603 [2024-11-26 13:30:29.108636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:40.603 [2024-11-26 13:30:29.108690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.603 [2024-11-26 13:30:29.108838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.603 spare 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.603 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.862 [2024-11-26 13:30:29.208925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:40.862 [2024-11-26 13:30:29.208954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:40.862 [2024-11-26 13:30:29.209039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:40.862 [2024-11-26 13:30:29.209188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:40.862 [2024-11-26 13:30:29.209203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:40.862 [2024-11-26 13:30:29.209338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.862 "name": "raid_bdev1", 00:17:40.862 "uuid": 
"db105458-06ff-4ea4-af4f-766483dd320e", 00:17:40.862 "strip_size_kb": 0, 00:17:40.862 "state": "online", 00:17:40.862 "raid_level": "raid1", 00:17:40.862 "superblock": true, 00:17:40.862 "num_base_bdevs": 2, 00:17:40.862 "num_base_bdevs_discovered": 2, 00:17:40.862 "num_base_bdevs_operational": 2, 00:17:40.862 "base_bdevs_list": [ 00:17:40.862 { 00:17:40.862 "name": "spare", 00:17:40.862 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:40.862 "is_configured": true, 00:17:40.862 "data_offset": 256, 00:17:40.862 "data_size": 7936 00:17:40.862 }, 00:17:40.862 { 00:17:40.862 "name": "BaseBdev2", 00:17:40.862 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:40.862 "is_configured": true, 00:17:40.862 "data_offset": 256, 00:17:40.862 "data_size": 7936 00:17:40.862 } 00:17:40.862 ] 00:17:40.862 }' 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.862 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.429 "name": "raid_bdev1", 00:17:41.429 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:41.429 "strip_size_kb": 0, 00:17:41.429 "state": "online", 00:17:41.429 "raid_level": "raid1", 00:17:41.429 "superblock": true, 00:17:41.429 "num_base_bdevs": 2, 00:17:41.429 "num_base_bdevs_discovered": 2, 00:17:41.429 "num_base_bdevs_operational": 2, 00:17:41.429 "base_bdevs_list": [ 00:17:41.429 { 00:17:41.429 "name": "spare", 00:17:41.429 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:41.429 "is_configured": true, 00:17:41.429 "data_offset": 256, 00:17:41.429 "data_size": 7936 00:17:41.429 }, 00:17:41.429 { 00:17:41.429 "name": "BaseBdev2", 00:17:41.429 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:41.429 "is_configured": true, 00:17:41.429 "data_offset": 256, 00:17:41.429 "data_size": 7936 00:17:41.429 } 00:17:41.429 ] 00:17:41.429 }' 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.429 [2024-11-26 13:30:29.890498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.429 "name": "raid_bdev1", 00:17:41.429 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:41.429 "strip_size_kb": 0, 00:17:41.429 "state": "online", 00:17:41.429 "raid_level": "raid1", 00:17:41.429 "superblock": true, 00:17:41.429 "num_base_bdevs": 2, 00:17:41.429 "num_base_bdevs_discovered": 1, 00:17:41.429 "num_base_bdevs_operational": 1, 00:17:41.429 "base_bdevs_list": [ 00:17:41.429 { 00:17:41.429 "name": null, 00:17:41.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.429 "is_configured": false, 00:17:41.429 "data_offset": 0, 00:17:41.429 "data_size": 7936 00:17:41.429 }, 00:17:41.429 { 00:17:41.429 "name": "BaseBdev2", 00:17:41.429 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:41.429 "is_configured": true, 00:17:41.429 "data_offset": 256, 00:17:41.429 "data_size": 7936 00:17:41.429 } 00:17:41.429 ] 00:17:41.429 }' 00:17:41.429 13:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.429 13:30:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.996 13:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:41.996 13:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.996 13:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.997 [2024-11-26 13:30:30.386610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.997 [2024-11-26 13:30:30.386744] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:41.997 [2024-11-26 13:30:30.386766] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:41.997 [2024-11-26 13:30:30.386800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.997 [2024-11-26 13:30:30.396693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:41.997 13:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.997 13:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:41.997 [2024-11-26 13:30:30.398756] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.934 "name": "raid_bdev1", 00:17:42.934 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:42.934 "strip_size_kb": 0, 00:17:42.934 "state": "online", 00:17:42.934 "raid_level": "raid1", 00:17:42.934 "superblock": true, 00:17:42.934 "num_base_bdevs": 2, 00:17:42.934 "num_base_bdevs_discovered": 2, 00:17:42.934 "num_base_bdevs_operational": 2, 00:17:42.934 "process": { 00:17:42.934 "type": "rebuild", 00:17:42.934 "target": "spare", 00:17:42.934 "progress": { 00:17:42.934 "blocks": 2560, 00:17:42.934 "percent": 32 00:17:42.934 } 00:17:42.934 }, 00:17:42.934 "base_bdevs_list": [ 00:17:42.934 { 00:17:42.934 "name": "spare", 00:17:42.934 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:42.934 "is_configured": true, 00:17:42.934 "data_offset": 256, 00:17:42.934 "data_size": 7936 00:17:42.934 }, 00:17:42.934 { 00:17:42.934 "name": "BaseBdev2", 00:17:42.934 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:42.934 "is_configured": true, 00:17:42.934 "data_offset": 256, 00:17:42.934 "data_size": 7936 00:17:42.934 } 00:17:42.934 ] 00:17:42.934 }' 00:17:42.934 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.194 [2024-11-26 13:30:31.568911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.194 [2024-11-26 13:30:31.605956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:43.194 [2024-11-26 13:30:31.606020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.194 [2024-11-26 13:30:31.606039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.194 [2024-11-26 13:30:31.606060] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.194 "name": "raid_bdev1", 00:17:43.194 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:43.194 "strip_size_kb": 0, 00:17:43.194 "state": "online", 00:17:43.194 "raid_level": "raid1", 00:17:43.194 "superblock": true, 00:17:43.194 "num_base_bdevs": 2, 00:17:43.194 "num_base_bdevs_discovered": 1, 00:17:43.194 "num_base_bdevs_operational": 1, 00:17:43.194 "base_bdevs_list": [ 00:17:43.194 { 00:17:43.194 "name": null, 00:17:43.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.194 
"is_configured": false, 00:17:43.194 "data_offset": 0, 00:17:43.194 "data_size": 7936 00:17:43.194 }, 00:17:43.194 { 00:17:43.194 "name": "BaseBdev2", 00:17:43.194 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:43.194 "is_configured": true, 00:17:43.194 "data_offset": 256, 00:17:43.194 "data_size": 7936 00:17:43.194 } 00:17:43.194 ] 00:17:43.194 }' 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.194 13:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.762 13:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:43.762 13:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.762 13:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.762 [2024-11-26 13:30:32.144146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:43.762 [2024-11-26 13:30:32.144198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.762 [2024-11-26 13:30:32.144226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:43.762 [2024-11-26 13:30:32.144254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.762 [2024-11-26 13:30:32.144470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.762 [2024-11-26 13:30:32.144497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:43.762 [2024-11-26 13:30:32.144549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:43.762 [2024-11-26 13:30:32.144568] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:17:43.762 [2024-11-26 13:30:32.144578] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:43.762 [2024-11-26 13:30:32.144603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.762 [2024-11-26 13:30:32.153917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:43.762 spare 00:17:43.762 13:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.762 13:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:43.762 [2024-11-26 13:30:32.155927] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.699 "name": "raid_bdev1", 00:17:44.699 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:44.699 "strip_size_kb": 0, 00:17:44.699 "state": "online", 00:17:44.699 "raid_level": "raid1", 00:17:44.699 "superblock": true, 00:17:44.699 "num_base_bdevs": 2, 00:17:44.699 "num_base_bdevs_discovered": 2, 00:17:44.699 "num_base_bdevs_operational": 2, 00:17:44.699 "process": { 00:17:44.699 "type": "rebuild", 00:17:44.699 "target": "spare", 00:17:44.699 "progress": { 00:17:44.699 "blocks": 2560, 00:17:44.699 "percent": 32 00:17:44.699 } 00:17:44.699 }, 00:17:44.699 "base_bdevs_list": [ 00:17:44.699 { 00:17:44.699 "name": "spare", 00:17:44.699 "uuid": "45336985-aadf-5554-9625-5e0cbcf14e96", 00:17:44.699 "is_configured": true, 00:17:44.699 "data_offset": 256, 00:17:44.699 "data_size": 7936 00:17:44.699 }, 00:17:44.699 { 00:17:44.699 "name": "BaseBdev2", 00:17:44.699 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:44.699 "is_configured": true, 00:17:44.699 "data_offset": 256, 00:17:44.699 "data_size": 7936 00:17:44.699 } 00:17:44.699 ] 00:17:44.699 }' 00:17:44.699 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.958 13:30:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.958 [2024-11-26 13:30:33.318064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.958 [2024-11-26 13:30:33.362020] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:44.958 [2024-11-26 13:30:33.362079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.958 [2024-11-26 13:30:33.362102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.958 [2024-11-26 13:30:33.362111] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.958 13:30:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.958 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.959 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.959 "name": "raid_bdev1", 00:17:44.959 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:44.959 "strip_size_kb": 0, 00:17:44.959 "state": "online", 00:17:44.959 "raid_level": "raid1", 00:17:44.959 "superblock": true, 00:17:44.959 "num_base_bdevs": 2, 00:17:44.959 "num_base_bdevs_discovered": 1, 00:17:44.959 "num_base_bdevs_operational": 1, 00:17:44.959 "base_bdevs_list": [ 00:17:44.959 { 00:17:44.959 "name": null, 00:17:44.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.959 "is_configured": false, 00:17:44.959 "data_offset": 0, 00:17:44.959 "data_size": 7936 00:17:44.959 }, 00:17:44.959 { 00:17:44.959 "name": "BaseBdev2", 00:17:44.959 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:44.959 "is_configured": true, 00:17:44.959 "data_offset": 256, 00:17:44.959 "data_size": 7936 00:17:44.959 } 00:17:44.959 ] 00:17:44.959 }' 00:17:44.959 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.959 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.526 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.527 "name": "raid_bdev1", 00:17:45.527 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:45.527 "strip_size_kb": 0, 00:17:45.527 "state": "online", 00:17:45.527 "raid_level": "raid1", 00:17:45.527 "superblock": true, 00:17:45.527 "num_base_bdevs": 2, 00:17:45.527 "num_base_bdevs_discovered": 1, 00:17:45.527 "num_base_bdevs_operational": 1, 00:17:45.527 "base_bdevs_list": [ 00:17:45.527 { 00:17:45.527 "name": null, 00:17:45.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.527 "is_configured": false, 00:17:45.527 "data_offset": 0, 00:17:45.527 "data_size": 7936 00:17:45.527 }, 00:17:45.527 { 00:17:45.527 "name": "BaseBdev2", 00:17:45.527 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:45.527 "is_configured": true, 
00:17:45.527 "data_offset": 256, 00:17:45.527 "data_size": 7936 00:17:45.527 } 00:17:45.527 ] 00:17:45.527 }' 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.527 13:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.527 [2024-11-26 13:30:34.056489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.527 [2024-11-26 13:30:34.056547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.527 [2024-11-26 13:30:34.056578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:45.527 [2024-11-26 13:30:34.056591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.527 [2024-11-26 13:30:34.056784] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.527 [2024-11-26 13:30:34.056803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.527 [2024-11-26 13:30:34.056856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:45.527 [2024-11-26 13:30:34.056875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.527 [2024-11-26 13:30:34.056886] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:45.527 [2024-11-26 13:30:34.056896] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:45.527 BaseBdev1 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.527 13:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.905 "name": "raid_bdev1", 00:17:46.905 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:46.905 "strip_size_kb": 0, 00:17:46.905 "state": "online", 00:17:46.905 "raid_level": "raid1", 00:17:46.905 "superblock": true, 00:17:46.905 "num_base_bdevs": 2, 00:17:46.905 "num_base_bdevs_discovered": 1, 00:17:46.905 "num_base_bdevs_operational": 1, 00:17:46.905 "base_bdevs_list": [ 00:17:46.905 { 00:17:46.905 "name": null, 00:17:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.905 "is_configured": false, 00:17:46.905 "data_offset": 0, 00:17:46.905 "data_size": 7936 00:17:46.905 }, 00:17:46.905 { 00:17:46.905 "name": "BaseBdev2", 00:17:46.905 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:46.905 "is_configured": true, 00:17:46.905 "data_offset": 256, 00:17:46.905 "data_size": 7936 00:17:46.905 } 00:17:46.905 ] 00:17:46.905 }' 00:17:46.905 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.905 13:30:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.165 "name": "raid_bdev1", 00:17:47.165 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:47.165 "strip_size_kb": 0, 00:17:47.165 "state": "online", 00:17:47.165 "raid_level": "raid1", 00:17:47.165 "superblock": true, 00:17:47.165 "num_base_bdevs": 2, 00:17:47.165 "num_base_bdevs_discovered": 1, 00:17:47.165 "num_base_bdevs_operational": 1, 00:17:47.165 "base_bdevs_list": [ 00:17:47.165 { 00:17:47.165 "name": null, 00:17:47.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.165 "is_configured": false, 00:17:47.165 "data_offset": 0, 00:17:47.165 
"data_size": 7936 00:17:47.165 }, 00:17:47.165 { 00:17:47.165 "name": "BaseBdev2", 00:17:47.165 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:47.165 "is_configured": true, 00:17:47.165 "data_offset": 256, 00:17:47.165 "data_size": 7936 00:17:47.165 } 00:17:47.165 ] 00:17:47.165 }' 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.165 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.425 [2024-11-26 13:30:35.736858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.425 [2024-11-26 13:30:35.736982] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.425 [2024-11-26 13:30:35.737003] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:47.425 request: 00:17:47.425 { 00:17:47.425 "base_bdev": "BaseBdev1", 00:17:47.425 "raid_bdev": "raid_bdev1", 00:17:47.425 "method": "bdev_raid_add_base_bdev", 00:17:47.425 "req_id": 1 00:17:47.425 } 00:17:47.425 Got JSON-RPC error response 00:17:47.425 response: 00:17:47.425 { 00:17:47.425 "code": -22, 00:17:47.425 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:47.425 } 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.425 13:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.362 "name": "raid_bdev1", 00:17:48.362 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:48.362 "strip_size_kb": 0, 00:17:48.362 "state": "online", 00:17:48.362 "raid_level": "raid1", 00:17:48.362 "superblock": true, 00:17:48.362 "num_base_bdevs": 2, 00:17:48.362 "num_base_bdevs_discovered": 1, 00:17:48.362 "num_base_bdevs_operational": 1, 00:17:48.362 "base_bdevs_list": [ 
00:17:48.362 { 00:17:48.362 "name": null, 00:17:48.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.362 "is_configured": false, 00:17:48.362 "data_offset": 0, 00:17:48.362 "data_size": 7936 00:17:48.362 }, 00:17:48.362 { 00:17:48.362 "name": "BaseBdev2", 00:17:48.362 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:48.362 "is_configured": true, 00:17:48.362 "data_offset": 256, 00:17:48.362 "data_size": 7936 00:17:48.362 } 00:17:48.362 ] 00:17:48.362 }' 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.362 13:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.931 "name": "raid_bdev1", 00:17:48.931 "uuid": "db105458-06ff-4ea4-af4f-766483dd320e", 00:17:48.931 "strip_size_kb": 0, 00:17:48.931 "state": "online", 00:17:48.931 "raid_level": "raid1", 00:17:48.931 "superblock": true, 00:17:48.931 "num_base_bdevs": 2, 00:17:48.931 "num_base_bdevs_discovered": 1, 00:17:48.931 "num_base_bdevs_operational": 1, 00:17:48.931 "base_bdevs_list": [ 00:17:48.931 { 00:17:48.931 "name": null, 00:17:48.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.931 "is_configured": false, 00:17:48.931 "data_offset": 0, 00:17:48.931 "data_size": 7936 00:17:48.931 }, 00:17:48.931 { 00:17:48.931 "name": "BaseBdev2", 00:17:48.931 "uuid": "7d646b3b-94ef-5d2b-99ef-14aacf9e3158", 00:17:48.931 "is_configured": true, 00:17:48.931 "data_offset": 256, 00:17:48.931 "data_size": 7936 00:17:48.931 } 00:17:48.931 ] 00:17:48.931 }' 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87431 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87431 ']' 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87431 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.931 
13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87431 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.931 killing process with pid 87431 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87431' 00:17:48.931 Received shutdown signal, test time was about 60.000000 seconds 00:17:48.931 00:17:48.931 Latency(us) 00:17:48.931 [2024-11-26T13:30:37.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.931 [2024-11-26T13:30:37.501Z] =================================================================================================================== 00:17:48.931 [2024-11-26T13:30:37.501Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87431 00:17:48.931 [2024-11-26 13:30:37.449641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.931 [2024-11-26 13:30:37.449736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.931 13:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87431 00:17:48.931 [2024-11-26 13:30:37.449780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.931 [2024-11-26 13:30:37.449796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:49.189 [2024-11-26 13:30:37.671029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.125 13:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:17:50.125 00:17:50.125 real 0m20.617s 00:17:50.125 user 0m28.129s 00:17:50.125 sys 0m2.239s 00:17:50.125 13:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.125 ************************************ 00:17:50.125 END TEST raid_rebuild_test_sb_md_separate 00:17:50.125 ************************************ 00:17:50.125 13:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.125 13:30:38 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:50.125 13:30:38 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:50.125 13:30:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:50.125 13:30:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.125 13:30:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.125 ************************************ 00:17:50.125 START TEST raid_state_function_test_sb_md_interleaved 00:17:50.125 ************************************ 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:50.125 13:30:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88123 00:17:50.125 Process raid pid: 88123 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88123' 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88123 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88123 ']' 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.125 13:30:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.125 [2024-11-26 13:30:38.671785] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:17:50.125 [2024-11-26 13:30:38.671968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.384 [2024-11-26 13:30:38.855414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.643 [2024-11-26 13:30:38.955129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.643 [2024-11-26 13:30:39.124498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.643 [2024-11-26 13:30:39.124540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.210 [2024-11-26 13:30:39.629330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.210 [2024-11-26 13:30:39.629392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.210 [2024-11-26 13:30:39.629406] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.210 [2024-11-26 13:30:39.629420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.210 13:30:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.210 13:30:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.210 "name": "Existed_Raid", 00:17:51.210 "uuid": "e17a0e94-b531-4f7f-8517-6fab39139593", 00:17:51.210 "strip_size_kb": 0, 00:17:51.210 "state": "configuring", 00:17:51.210 "raid_level": "raid1", 00:17:51.210 "superblock": true, 00:17:51.210 "num_base_bdevs": 2, 00:17:51.210 "num_base_bdevs_discovered": 0, 00:17:51.210 "num_base_bdevs_operational": 2, 00:17:51.210 "base_bdevs_list": [ 00:17:51.210 { 00:17:51.210 "name": "BaseBdev1", 00:17:51.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.210 "is_configured": false, 00:17:51.210 "data_offset": 0, 00:17:51.210 "data_size": 0 00:17:51.210 }, 00:17:51.210 { 00:17:51.210 "name": "BaseBdev2", 00:17:51.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.210 "is_configured": false, 00:17:51.210 "data_offset": 0, 00:17:51.210 "data_size": 0 00:17:51.210 } 00:17:51.210 ] 00:17:51.210 }' 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.210 13:30:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.778 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:51.778 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.778 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.778 [2024-11-26 13:30:40.133379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:51.778 [2024-11-26 13:30:40.133423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:51.778 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.778 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:51.778 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.778 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 [2024-11-26 13:30:40.141384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.779 [2024-11-26 13:30:40.141425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.779 [2024-11-26 13:30:40.141436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.779 [2024-11-26 13:30:40.141451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 [2024-11-26 13:30:40.179641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.779 BaseBdev1 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 [ 00:17:51.779 { 00:17:51.779 "name": "BaseBdev1", 00:17:51.779 "aliases": [ 00:17:51.779 "cf8239c6-a261-49c3-a784-03f148aca616" 00:17:51.779 ], 00:17:51.779 "product_name": "Malloc disk", 00:17:51.779 "block_size": 4128, 00:17:51.779 "num_blocks": 8192, 00:17:51.779 "uuid": "cf8239c6-a261-49c3-a784-03f148aca616", 00:17:51.779 "md_size": 32, 00:17:51.779 
"md_interleave": true, 00:17:51.779 "dif_type": 0, 00:17:51.779 "assigned_rate_limits": { 00:17:51.779 "rw_ios_per_sec": 0, 00:17:51.779 "rw_mbytes_per_sec": 0, 00:17:51.779 "r_mbytes_per_sec": 0, 00:17:51.779 "w_mbytes_per_sec": 0 00:17:51.779 }, 00:17:51.779 "claimed": true, 00:17:51.779 "claim_type": "exclusive_write", 00:17:51.779 "zoned": false, 00:17:51.779 "supported_io_types": { 00:17:51.779 "read": true, 00:17:51.779 "write": true, 00:17:51.779 "unmap": true, 00:17:51.779 "flush": true, 00:17:51.779 "reset": true, 00:17:51.779 "nvme_admin": false, 00:17:51.779 "nvme_io": false, 00:17:51.779 "nvme_io_md": false, 00:17:51.779 "write_zeroes": true, 00:17:51.779 "zcopy": true, 00:17:51.779 "get_zone_info": false, 00:17:51.779 "zone_management": false, 00:17:51.779 "zone_append": false, 00:17:51.779 "compare": false, 00:17:51.779 "compare_and_write": false, 00:17:51.779 "abort": true, 00:17:51.779 "seek_hole": false, 00:17:51.779 "seek_data": false, 00:17:51.779 "copy": true, 00:17:51.779 "nvme_iov_md": false 00:17:51.779 }, 00:17:51.779 "memory_domains": [ 00:17:51.779 { 00:17:51.779 "dma_device_id": "system", 00:17:51.779 "dma_device_type": 1 00:17:51.779 }, 00:17:51.779 { 00:17:51.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.779 "dma_device_type": 2 00:17:51.779 } 00:17:51.779 ], 00:17:51.779 "driver_specific": {} 00:17:51.779 } 00:17:51.779 ] 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.779 13:30:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.779 "name": "Existed_Raid", 00:17:51.779 "uuid": "7cdd5df9-5dec-468b-9097-9c61a5ff0520", 00:17:51.779 "strip_size_kb": 0, 00:17:51.779 "state": "configuring", 00:17:51.779 "raid_level": "raid1", 
00:17:51.779 "superblock": true, 00:17:51.779 "num_base_bdevs": 2, 00:17:51.779 "num_base_bdevs_discovered": 1, 00:17:51.779 "num_base_bdevs_operational": 2, 00:17:51.779 "base_bdevs_list": [ 00:17:51.779 { 00:17:51.779 "name": "BaseBdev1", 00:17:51.779 "uuid": "cf8239c6-a261-49c3-a784-03f148aca616", 00:17:51.779 "is_configured": true, 00:17:51.779 "data_offset": 256, 00:17:51.779 "data_size": 7936 00:17:51.779 }, 00:17:51.779 { 00:17:51.779 "name": "BaseBdev2", 00:17:51.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.779 "is_configured": false, 00:17:51.779 "data_offset": 0, 00:17:51.779 "data_size": 0 00:17:51.779 } 00:17:51.779 ] 00:17:51.779 }' 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.779 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.346 [2024-11-26 13:30:40.735793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.346 [2024-11-26 13:30:40.735831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.346 [2024-11-26 13:30:40.743856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.346 [2024-11-26 13:30:40.745862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.346 [2024-11-26 13:30:40.745905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.346 
13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.346 "name": "Existed_Raid", 00:17:52.346 "uuid": "37f0e5e3-fc68-4fab-968f-6bef799dcb5a", 00:17:52.346 "strip_size_kb": 0, 00:17:52.346 "state": "configuring", 00:17:52.346 "raid_level": "raid1", 00:17:52.346 "superblock": true, 00:17:52.346 "num_base_bdevs": 2, 00:17:52.346 "num_base_bdevs_discovered": 1, 00:17:52.346 "num_base_bdevs_operational": 2, 00:17:52.346 "base_bdevs_list": [ 00:17:52.346 { 00:17:52.346 "name": "BaseBdev1", 00:17:52.346 "uuid": "cf8239c6-a261-49c3-a784-03f148aca616", 00:17:52.346 "is_configured": true, 00:17:52.346 "data_offset": 256, 00:17:52.346 "data_size": 7936 00:17:52.346 }, 00:17:52.346 { 00:17:52.346 "name": "BaseBdev2", 00:17:52.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.346 "is_configured": false, 00:17:52.346 "data_offset": 0, 00:17:52.346 "data_size": 0 00:17:52.346 } 00:17:52.346 ] 00:17:52.346 }' 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:52.346 13:30:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.915 [2024-11-26 13:30:41.312595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.915 [2024-11-26 13:30:41.312796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:52.915 [2024-11-26 13:30:41.312813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:52.915 [2024-11-26 13:30:41.312918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:52.915 [2024-11-26 13:30:41.313019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:52.915 [2024-11-26 13:30:41.313036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:52.915 BaseBdev2 00:17:52.915 [2024-11-26 13:30:41.313109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.915 [ 00:17:52.915 { 00:17:52.915 "name": "BaseBdev2", 00:17:52.915 "aliases": [ 00:17:52.915 "a38d59a9-b108-4b20-a204-b50f8bfa92cd" 00:17:52.915 ], 00:17:52.915 "product_name": "Malloc disk", 00:17:52.915 "block_size": 4128, 00:17:52.915 "num_blocks": 8192, 00:17:52.915 "uuid": "a38d59a9-b108-4b20-a204-b50f8bfa92cd", 00:17:52.915 "md_size": 32, 00:17:52.915 "md_interleave": true, 00:17:52.915 "dif_type": 0, 00:17:52.915 "assigned_rate_limits": { 00:17:52.915 "rw_ios_per_sec": 0, 00:17:52.915 "rw_mbytes_per_sec": 0, 00:17:52.915 "r_mbytes_per_sec": 0, 00:17:52.915 "w_mbytes_per_sec": 0 00:17:52.915 }, 00:17:52.915 "claimed": true, 00:17:52.915 "claim_type": "exclusive_write", 
00:17:52.915 "zoned": false, 00:17:52.915 "supported_io_types": { 00:17:52.915 "read": true, 00:17:52.915 "write": true, 00:17:52.915 "unmap": true, 00:17:52.915 "flush": true, 00:17:52.915 "reset": true, 00:17:52.915 "nvme_admin": false, 00:17:52.915 "nvme_io": false, 00:17:52.915 "nvme_io_md": false, 00:17:52.915 "write_zeroes": true, 00:17:52.915 "zcopy": true, 00:17:52.915 "get_zone_info": false, 00:17:52.915 "zone_management": false, 00:17:52.915 "zone_append": false, 00:17:52.915 "compare": false, 00:17:52.915 "compare_and_write": false, 00:17:52.915 "abort": true, 00:17:52.915 "seek_hole": false, 00:17:52.915 "seek_data": false, 00:17:52.915 "copy": true, 00:17:52.915 "nvme_iov_md": false 00:17:52.915 }, 00:17:52.915 "memory_domains": [ 00:17:52.915 { 00:17:52.915 "dma_device_id": "system", 00:17:52.915 "dma_device_type": 1 00:17:52.915 }, 00:17:52.915 { 00:17:52.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.915 "dma_device_type": 2 00:17:52.915 } 00:17:52.915 ], 00:17:52.915 "driver_specific": {} 00:17:52.915 } 00:17:52.915 ] 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.915 
13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.915 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.915 "name": "Existed_Raid", 00:17:52.915 "uuid": "37f0e5e3-fc68-4fab-968f-6bef799dcb5a", 00:17:52.916 "strip_size_kb": 0, 00:17:52.916 "state": "online", 00:17:52.916 "raid_level": "raid1", 00:17:52.916 "superblock": true, 00:17:52.916 "num_base_bdevs": 2, 00:17:52.916 "num_base_bdevs_discovered": 2, 00:17:52.916 
"num_base_bdevs_operational": 2, 00:17:52.916 "base_bdevs_list": [ 00:17:52.916 { 00:17:52.916 "name": "BaseBdev1", 00:17:52.916 "uuid": "cf8239c6-a261-49c3-a784-03f148aca616", 00:17:52.916 "is_configured": true, 00:17:52.916 "data_offset": 256, 00:17:52.916 "data_size": 7936 00:17:52.916 }, 00:17:52.916 { 00:17:52.916 "name": "BaseBdev2", 00:17:52.916 "uuid": "a38d59a9-b108-4b20-a204-b50f8bfa92cd", 00:17:52.916 "is_configured": true, 00:17:52.916 "data_offset": 256, 00:17:52.916 "data_size": 7936 00:17:52.916 } 00:17:52.916 ] 00:17:52.916 }' 00:17:52.916 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.916 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.483 13:30:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.483 [2024-11-26 13:30:41.869664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.483 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.483 "name": "Existed_Raid", 00:17:53.483 "aliases": [ 00:17:53.483 "37f0e5e3-fc68-4fab-968f-6bef799dcb5a" 00:17:53.483 ], 00:17:53.483 "product_name": "Raid Volume", 00:17:53.483 "block_size": 4128, 00:17:53.483 "num_blocks": 7936, 00:17:53.483 "uuid": "37f0e5e3-fc68-4fab-968f-6bef799dcb5a", 00:17:53.483 "md_size": 32, 00:17:53.483 "md_interleave": true, 00:17:53.483 "dif_type": 0, 00:17:53.483 "assigned_rate_limits": { 00:17:53.483 "rw_ios_per_sec": 0, 00:17:53.483 "rw_mbytes_per_sec": 0, 00:17:53.483 "r_mbytes_per_sec": 0, 00:17:53.483 "w_mbytes_per_sec": 0 00:17:53.483 }, 00:17:53.483 "claimed": false, 00:17:53.483 "zoned": false, 00:17:53.483 "supported_io_types": { 00:17:53.484 "read": true, 00:17:53.484 "write": true, 00:17:53.484 "unmap": false, 00:17:53.484 "flush": false, 00:17:53.484 "reset": true, 00:17:53.484 "nvme_admin": false, 00:17:53.484 "nvme_io": false, 00:17:53.484 "nvme_io_md": false, 00:17:53.484 "write_zeroes": true, 00:17:53.484 "zcopy": false, 00:17:53.484 "get_zone_info": false, 00:17:53.484 "zone_management": false, 00:17:53.484 "zone_append": false, 00:17:53.484 "compare": false, 00:17:53.484 "compare_and_write": false, 00:17:53.484 "abort": false, 00:17:53.484 "seek_hole": false, 00:17:53.484 "seek_data": false, 00:17:53.484 "copy": false, 00:17:53.484 "nvme_iov_md": false 00:17:53.484 }, 00:17:53.484 "memory_domains": [ 00:17:53.484 { 00:17:53.484 "dma_device_id": "system", 00:17:53.484 "dma_device_type": 1 00:17:53.484 }, 00:17:53.484 { 00:17:53.484 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:53.484 "dma_device_type": 2 00:17:53.484 }, 00:17:53.484 { 00:17:53.484 "dma_device_id": "system", 00:17:53.484 "dma_device_type": 1 00:17:53.484 }, 00:17:53.484 { 00:17:53.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.484 "dma_device_type": 2 00:17:53.484 } 00:17:53.484 ], 00:17:53.484 "driver_specific": { 00:17:53.484 "raid": { 00:17:53.484 "uuid": "37f0e5e3-fc68-4fab-968f-6bef799dcb5a", 00:17:53.484 "strip_size_kb": 0, 00:17:53.484 "state": "online", 00:17:53.484 "raid_level": "raid1", 00:17:53.484 "superblock": true, 00:17:53.484 "num_base_bdevs": 2, 00:17:53.484 "num_base_bdevs_discovered": 2, 00:17:53.484 "num_base_bdevs_operational": 2, 00:17:53.484 "base_bdevs_list": [ 00:17:53.484 { 00:17:53.484 "name": "BaseBdev1", 00:17:53.484 "uuid": "cf8239c6-a261-49c3-a784-03f148aca616", 00:17:53.484 "is_configured": true, 00:17:53.484 "data_offset": 256, 00:17:53.484 "data_size": 7936 00:17:53.484 }, 00:17:53.484 { 00:17:53.484 "name": "BaseBdev2", 00:17:53.484 "uuid": "a38d59a9-b108-4b20-a204-b50f8bfa92cd", 00:17:53.484 "is_configured": true, 00:17:53.484 "data_offset": 256, 00:17:53.484 "data_size": 7936 00:17:53.484 } 00:17:53.484 ] 00:17:53.484 } 00:17:53.484 } 00:17:53.484 }' 00:17:53.484 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.484 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:53.484 BaseBdev2' 00:17:53.484 13:30:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.484 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:53.484 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:53.484 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:53.484 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.484 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.484 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:53.743 
13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 [2024-11-26 13:30:42.141351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:53.743 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.744 13:30:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.744 "name": "Existed_Raid", 00:17:53.744 "uuid": "37f0e5e3-fc68-4fab-968f-6bef799dcb5a", 00:17:53.744 "strip_size_kb": 0, 00:17:53.744 "state": "online", 00:17:53.744 "raid_level": "raid1", 00:17:53.744 "superblock": true, 00:17:53.744 "num_base_bdevs": 2, 00:17:53.744 "num_base_bdevs_discovered": 1, 00:17:53.744 "num_base_bdevs_operational": 1, 00:17:53.744 "base_bdevs_list": [ 00:17:53.744 { 00:17:53.744 "name": null, 00:17:53.744 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:53.744 "is_configured": false, 00:17:53.744 "data_offset": 0, 00:17:53.744 "data_size": 7936 00:17:53.744 }, 00:17:53.744 { 00:17:53.744 "name": "BaseBdev2", 00:17:53.744 "uuid": "a38d59a9-b108-4b20-a204-b50f8bfa92cd", 00:17:53.744 "is_configured": true, 00:17:53.744 "data_offset": 256, 00:17:53.744 "data_size": 7936 00:17:53.744 } 00:17:53.744 ] 00:17:53.744 }' 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.744 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:54.312 13:30:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.312 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.312 [2024-11-26 13:30:42.794048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:54.312 [2024-11-26 13:30:42.794165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.312 [2024-11-26 13:30:42.859181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.313 [2024-11-26 13:30:42.859250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.313 [2024-11-26 13:30:42.859270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.313 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88123 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88123 ']' 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88123 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88123 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.572 killing process with pid 88123 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88123' 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88123 00:17:54.572 [2024-11-26 13:30:42.944957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.572 13:30:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88123 00:17:54.572 [2024-11-26 13:30:42.957248] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.509 
13:30:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:55.509 00:17:55.509 real 0m5.241s 00:17:55.509 user 0m8.092s 00:17:55.509 sys 0m0.777s 00:17:55.509 13:30:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.509 13:30:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.509 ************************************ 00:17:55.509 END TEST raid_state_function_test_sb_md_interleaved 00:17:55.509 ************************************ 00:17:55.509 13:30:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:55.509 13:30:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:55.509 13:30:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.509 13:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.509 ************************************ 00:17:55.509 START TEST raid_superblock_test_md_interleaved 00:17:55.509 ************************************ 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88375 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88375 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88375 ']' 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.509 13:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.509 [2024-11-26 13:30:43.966899] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:17:55.509 [2024-11-26 13:30:43.967084] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88375 ] 00:17:55.768 [2024-11-26 13:30:44.145648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.768 [2024-11-26 13:30:44.243150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.027 [2024-11-26 13:30:44.409797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.027 [2024-11-26 13:30:44.409860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.596 malloc1 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.596 [2024-11-26 13:30:44.899159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:56.596 [2024-11-26 13:30:44.899228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.596 [2024-11-26 13:30:44.899274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.596 [2024-11-26 13:30:44.899288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.596 
[2024-11-26 13:30:44.901442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.596 [2024-11-26 13:30:44.901480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:56.596 pt1 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.596 malloc2 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.596 [2024-11-26 13:30:44.949048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.596 [2024-11-26 13:30:44.949104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.596 [2024-11-26 13:30:44.949131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.596 [2024-11-26 13:30:44.949143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.596 [2024-11-26 13:30:44.951358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.596 [2024-11-26 13:30:44.951394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.596 pt2 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.596 [2024-11-26 13:30:44.961108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:56.596 [2024-11-26 13:30:44.963242] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.596 [2024-11-26 13:30:44.963467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.596 [2024-11-26 13:30:44.963485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:56.596 [2024-11-26 13:30:44.963604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:56.596 [2024-11-26 13:30:44.963692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.596 [2024-11-26 13:30:44.963710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.596 [2024-11-26 13:30:44.963813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.596 
13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.596 13:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.596 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.596 "name": "raid_bdev1", 00:17:56.596 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:56.596 "strip_size_kb": 0, 00:17:56.596 "state": "online", 00:17:56.596 "raid_level": "raid1", 00:17:56.596 "superblock": true, 00:17:56.596 "num_base_bdevs": 2, 00:17:56.596 "num_base_bdevs_discovered": 2, 00:17:56.596 "num_base_bdevs_operational": 2, 00:17:56.596 "base_bdevs_list": [ 00:17:56.596 { 00:17:56.596 "name": "pt1", 00:17:56.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.596 "is_configured": true, 00:17:56.596 "data_offset": 256, 00:17:56.596 "data_size": 7936 00:17:56.596 }, 00:17:56.596 { 00:17:56.596 "name": "pt2", 00:17:56.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.596 "is_configured": true, 00:17:56.596 "data_offset": 256, 00:17:56.596 "data_size": 7936 00:17:56.596 } 00:17:56.596 ] 00:17:56.596 }' 00:17:56.596 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.596 13:30:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.164 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.164 [2024-11-26 13:30:45.461489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.165 "name": "raid_bdev1", 00:17:57.165 "aliases": [ 00:17:57.165 "78916082-7b85-457b-abd2-9041d93e25ed" 00:17:57.165 ], 00:17:57.165 "product_name": "Raid Volume", 00:17:57.165 "block_size": 4128, 00:17:57.165 "num_blocks": 7936, 00:17:57.165 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:57.165 "md_size": 32, 
00:17:57.165 "md_interleave": true, 00:17:57.165 "dif_type": 0, 00:17:57.165 "assigned_rate_limits": { 00:17:57.165 "rw_ios_per_sec": 0, 00:17:57.165 "rw_mbytes_per_sec": 0, 00:17:57.165 "r_mbytes_per_sec": 0, 00:17:57.165 "w_mbytes_per_sec": 0 00:17:57.165 }, 00:17:57.165 "claimed": false, 00:17:57.165 "zoned": false, 00:17:57.165 "supported_io_types": { 00:17:57.165 "read": true, 00:17:57.165 "write": true, 00:17:57.165 "unmap": false, 00:17:57.165 "flush": false, 00:17:57.165 "reset": true, 00:17:57.165 "nvme_admin": false, 00:17:57.165 "nvme_io": false, 00:17:57.165 "nvme_io_md": false, 00:17:57.165 "write_zeroes": true, 00:17:57.165 "zcopy": false, 00:17:57.165 "get_zone_info": false, 00:17:57.165 "zone_management": false, 00:17:57.165 "zone_append": false, 00:17:57.165 "compare": false, 00:17:57.165 "compare_and_write": false, 00:17:57.165 "abort": false, 00:17:57.165 "seek_hole": false, 00:17:57.165 "seek_data": false, 00:17:57.165 "copy": false, 00:17:57.165 "nvme_iov_md": false 00:17:57.165 }, 00:17:57.165 "memory_domains": [ 00:17:57.165 { 00:17:57.165 "dma_device_id": "system", 00:17:57.165 "dma_device_type": 1 00:17:57.165 }, 00:17:57.165 { 00:17:57.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.165 "dma_device_type": 2 00:17:57.165 }, 00:17:57.165 { 00:17:57.165 "dma_device_id": "system", 00:17:57.165 "dma_device_type": 1 00:17:57.165 }, 00:17:57.165 { 00:17:57.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.165 "dma_device_type": 2 00:17:57.165 } 00:17:57.165 ], 00:17:57.165 "driver_specific": { 00:17:57.165 "raid": { 00:17:57.165 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:57.165 "strip_size_kb": 0, 00:17:57.165 "state": "online", 00:17:57.165 "raid_level": "raid1", 00:17:57.165 "superblock": true, 00:17:57.165 "num_base_bdevs": 2, 00:17:57.165 "num_base_bdevs_discovered": 2, 00:17:57.165 "num_base_bdevs_operational": 2, 00:17:57.165 "base_bdevs_list": [ 00:17:57.165 { 00:17:57.165 "name": "pt1", 00:17:57.165 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:57.165 "is_configured": true, 00:17:57.165 "data_offset": 256, 00:17:57.165 "data_size": 7936 00:17:57.165 }, 00:17:57.165 { 00:17:57.165 "name": "pt2", 00:17:57.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.165 "is_configured": true, 00:17:57.165 "data_offset": 256, 00:17:57.165 "data_size": 7936 00:17:57.165 } 00:17:57.165 ] 00:17:57.165 } 00:17:57.165 } 00:17:57.165 }' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:57.165 pt2' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:57.165 13:30:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.165 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:57.165 [2024-11-26 13:30:45.717452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=78916082-7b85-457b-abd2-9041d93e25ed 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 78916082-7b85-457b-abd2-9041d93e25ed ']' 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.425 [2024-11-26 13:30:45.769187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.425 [2024-11-26 13:30:45.769212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.425 [2024-11-26 13:30:45.769314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.425 [2024-11-26 13:30:45.769372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.425 [2024-11-26 13:30:45.769390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.425 13:30:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.425 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 13:30:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 [2024-11-26 13:30:45.913227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:57.426 [2024-11-26 13:30:45.915507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:57.426 [2024-11-26 13:30:45.915587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:17:57.426 [2024-11-26 13:30:45.915647] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:57.426 [2024-11-26 13:30:45.915670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.426 [2024-11-26 13:30:45.915682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:57.426 request: 00:17:57.426 { 00:17:57.426 "name": "raid_bdev1", 00:17:57.426 "raid_level": "raid1", 00:17:57.426 "base_bdevs": [ 00:17:57.426 "malloc1", 00:17:57.426 "malloc2" 00:17:57.426 ], 00:17:57.426 "superblock": false, 00:17:57.426 "method": "bdev_raid_create", 00:17:57.426 "req_id": 1 00:17:57.426 } 00:17:57.426 Got JSON-RPC error response 00:17:57.426 response: 00:17:57.426 { 00:17:57.426 "code": -17, 00:17:57.426 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:57.426 } 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 13:30:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 [2024-11-26 13:30:45.981244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.426 [2024-11-26 13:30:45.981326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.426 [2024-11-26 13:30:45.981346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:57.426 [2024-11-26 13:30:45.981360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.426 [2024-11-26 13:30:45.983586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.426 [2024-11-26 13:30:45.983627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.426 [2024-11-26 13:30:45.983675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:57.426 [2024-11-26 13:30:45.983736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.426 pt1 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 13:30:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.426 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.685 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.685 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.685 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.685 13:30:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.685 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.685 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.685 
"name": "raid_bdev1", 00:17:57.685 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:57.685 "strip_size_kb": 0, 00:17:57.685 "state": "configuring", 00:17:57.685 "raid_level": "raid1", 00:17:57.685 "superblock": true, 00:17:57.685 "num_base_bdevs": 2, 00:17:57.685 "num_base_bdevs_discovered": 1, 00:17:57.685 "num_base_bdevs_operational": 2, 00:17:57.685 "base_bdevs_list": [ 00:17:57.685 { 00:17:57.685 "name": "pt1", 00:17:57.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.685 "is_configured": true, 00:17:57.685 "data_offset": 256, 00:17:57.685 "data_size": 7936 00:17:57.685 }, 00:17:57.685 { 00:17:57.685 "name": null, 00:17:57.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.685 "is_configured": false, 00:17:57.685 "data_offset": 256, 00:17:57.685 "data_size": 7936 00:17:57.685 } 00:17:57.685 ] 00:17:57.685 }' 00:17:57.685 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.685 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.285 [2024-11-26 13:30:46.513412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.285 [2024-11-26 13:30:46.513468] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.285 [2024-11-26 13:30:46.513492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:58.285 [2024-11-26 13:30:46.513507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.285 [2024-11-26 13:30:46.513651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.285 [2024-11-26 13:30:46.513674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.285 [2024-11-26 13:30:46.513716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.285 [2024-11-26 13:30:46.513751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.285 [2024-11-26 13:30:46.513852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:58.285 [2024-11-26 13:30:46.513870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:58.285 [2024-11-26 13:30:46.513939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:58.285 [2024-11-26 13:30:46.514023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.285 [2024-11-26 13:30:46.514044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:58.285 [2024-11-26 13:30:46.514108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.285 pt2 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.285 13:30:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.285 "name": 
"raid_bdev1", 00:17:58.285 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:58.285 "strip_size_kb": 0, 00:17:58.285 "state": "online", 00:17:58.285 "raid_level": "raid1", 00:17:58.285 "superblock": true, 00:17:58.285 "num_base_bdevs": 2, 00:17:58.285 "num_base_bdevs_discovered": 2, 00:17:58.285 "num_base_bdevs_operational": 2, 00:17:58.285 "base_bdevs_list": [ 00:17:58.285 { 00:17:58.285 "name": "pt1", 00:17:58.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.285 "is_configured": true, 00:17:58.285 "data_offset": 256, 00:17:58.285 "data_size": 7936 00:17:58.285 }, 00:17:58.285 { 00:17:58.285 "name": "pt2", 00:17:58.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.285 "is_configured": true, 00:17:58.285 "data_offset": 256, 00:17:58.285 "data_size": 7936 00:17:58.285 } 00:17:58.285 ] 00:17:58.285 }' 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.285 13:30:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.544 13:30:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.544 [2024-11-26 13:30:47.053755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.544 "name": "raid_bdev1", 00:17:58.544 "aliases": [ 00:17:58.544 "78916082-7b85-457b-abd2-9041d93e25ed" 00:17:58.544 ], 00:17:58.544 "product_name": "Raid Volume", 00:17:58.544 "block_size": 4128, 00:17:58.544 "num_blocks": 7936, 00:17:58.544 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:58.544 "md_size": 32, 00:17:58.544 "md_interleave": true, 00:17:58.544 "dif_type": 0, 00:17:58.544 "assigned_rate_limits": { 00:17:58.544 "rw_ios_per_sec": 0, 00:17:58.544 "rw_mbytes_per_sec": 0, 00:17:58.544 "r_mbytes_per_sec": 0, 00:17:58.544 "w_mbytes_per_sec": 0 00:17:58.544 }, 00:17:58.544 "claimed": false, 00:17:58.544 "zoned": false, 00:17:58.544 "supported_io_types": { 00:17:58.544 "read": true, 00:17:58.544 "write": true, 00:17:58.544 "unmap": false, 00:17:58.544 "flush": false, 00:17:58.544 "reset": true, 00:17:58.544 "nvme_admin": false, 00:17:58.544 "nvme_io": false, 00:17:58.544 "nvme_io_md": false, 00:17:58.544 "write_zeroes": true, 00:17:58.544 "zcopy": false, 00:17:58.544 "get_zone_info": false, 00:17:58.544 "zone_management": false, 00:17:58.544 "zone_append": false, 00:17:58.544 "compare": false, 00:17:58.544 "compare_and_write": false, 00:17:58.544 "abort": false, 00:17:58.544 "seek_hole": false, 00:17:58.544 "seek_data": false, 00:17:58.544 "copy": false, 00:17:58.544 "nvme_iov_md": 
false 00:17:58.544 }, 00:17:58.544 "memory_domains": [ 00:17:58.544 { 00:17:58.544 "dma_device_id": "system", 00:17:58.544 "dma_device_type": 1 00:17:58.544 }, 00:17:58.544 { 00:17:58.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.544 "dma_device_type": 2 00:17:58.544 }, 00:17:58.544 { 00:17:58.544 "dma_device_id": "system", 00:17:58.544 "dma_device_type": 1 00:17:58.544 }, 00:17:58.544 { 00:17:58.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.544 "dma_device_type": 2 00:17:58.544 } 00:17:58.544 ], 00:17:58.544 "driver_specific": { 00:17:58.544 "raid": { 00:17:58.544 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:58.544 "strip_size_kb": 0, 00:17:58.544 "state": "online", 00:17:58.544 "raid_level": "raid1", 00:17:58.544 "superblock": true, 00:17:58.544 "num_base_bdevs": 2, 00:17:58.544 "num_base_bdevs_discovered": 2, 00:17:58.544 "num_base_bdevs_operational": 2, 00:17:58.544 "base_bdevs_list": [ 00:17:58.544 { 00:17:58.544 "name": "pt1", 00:17:58.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.544 "is_configured": true, 00:17:58.544 "data_offset": 256, 00:17:58.544 "data_size": 7936 00:17:58.544 }, 00:17:58.544 { 00:17:58.544 "name": "pt2", 00:17:58.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.544 "is_configured": true, 00:17:58.544 "data_offset": 256, 00:17:58.544 "data_size": 7936 00:17:58.544 } 00:17:58.544 ] 00:17:58.544 } 00:17:58.544 } 00:17:58.544 }' 00:17:58.544 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:58.803 pt2' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.803 [2024-11-26 13:30:47.317829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 78916082-7b85-457b-abd2-9041d93e25ed '!=' 78916082-7b85-457b-abd2-9041d93e25ed ']' 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:58.803 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.804 [2024-11-26 13:30:47.361643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.804 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:59.062 "name": "raid_bdev1", 00:17:59.062 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:59.062 "strip_size_kb": 0, 00:17:59.062 "state": "online", 00:17:59.062 "raid_level": "raid1", 00:17:59.062 "superblock": true, 00:17:59.062 "num_base_bdevs": 2, 00:17:59.062 "num_base_bdevs_discovered": 1, 00:17:59.062 "num_base_bdevs_operational": 1, 00:17:59.062 "base_bdevs_list": [ 00:17:59.062 { 00:17:59.062 "name": null, 00:17:59.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.062 "is_configured": false, 00:17:59.062 "data_offset": 0, 00:17:59.062 "data_size": 7936 00:17:59.062 }, 00:17:59.062 { 00:17:59.062 "name": "pt2", 00:17:59.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.062 "is_configured": true, 00:17:59.062 "data_offset": 256, 00:17:59.062 "data_size": 7936 00:17:59.062 } 00:17:59.062 ] 00:17:59.062 }' 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.062 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.629 [2024-11-26 13:30:47.889729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.629 [2024-11-26 13:30:47.889755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.629 [2024-11-26 13:30:47.889804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.629 [2024-11-26 13:30:47.889846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:59.629 [2024-11-26 13:30:47.889862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.629 [2024-11-26 13:30:47.965764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.629 [2024-11-26 13:30:47.965813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.629 [2024-11-26 13:30:47.965830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:59.629 [2024-11-26 13:30:47.965843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.629 [2024-11-26 13:30:47.967933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.629 [2024-11-26 13:30:47.967977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.629 [2024-11-26 13:30:47.968025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:59.629 [2024-11-26 13:30:47.968069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.629 [2024-11-26 13:30:47.968132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:59.629 [2024-11-26 13:30:47.968151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:59.629 [2024-11-26 13:30:47.968279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:59.629 [2024-11-26 13:30:47.968354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:59.629 [2024-11-26 13:30:47.968366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:59.629 [2024-11-26 13:30:47.968432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.629 pt2 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.629 13:30:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.629 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.630 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.630 13:30:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.630 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.630 "name": "raid_bdev1", 00:17:59.630 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:17:59.630 "strip_size_kb": 0, 00:17:59.630 "state": "online", 00:17:59.630 "raid_level": "raid1", 00:17:59.630 "superblock": true, 00:17:59.630 "num_base_bdevs": 2, 00:17:59.630 "num_base_bdevs_discovered": 1, 00:17:59.630 "num_base_bdevs_operational": 1, 00:17:59.630 "base_bdevs_list": [ 00:17:59.630 { 00:17:59.630 "name": null, 00:17:59.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.630 "is_configured": false, 00:17:59.630 "data_offset": 256, 00:17:59.630 "data_size": 7936 00:17:59.630 }, 00:17:59.630 { 00:17:59.630 "name": "pt2", 00:17:59.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.630 "is_configured": true, 00:17:59.630 "data_offset": 256, 00:17:59.630 "data_size": 7936 00:17:59.630 } 00:17:59.630 ] 00:17:59.630 }' 00:17:59.630 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.630 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.198 13:30:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.198 [2024-11-26 13:30:48.489821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.198 [2024-11-26 13:30:48.489849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.198 [2024-11-26 13:30:48.489895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.198 [2024-11-26 13:30:48.489940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.198 [2024-11-26 13:30:48.489953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.198 [2024-11-26 13:30:48.553874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.198 [2024-11-26 13:30:48.553925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.198 [2024-11-26 13:30:48.553948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:00.198 [2024-11-26 13:30:48.553959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.198 [2024-11-26 13:30:48.556201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.198 [2024-11-26 13:30:48.556249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.198 [2024-11-26 13:30:48.556319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.198 [2024-11-26 13:30:48.556362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.198 [2024-11-26 13:30:48.556462] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:00.198 [2024-11-26 13:30:48.556478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.198 [2024-11-26 13:30:48.556513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:00.198 [2024-11-26 13:30:48.556571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.198 [2024-11-26 13:30:48.556678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:00.198 [2024-11-26 13:30:48.556693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:00.198 [2024-11-26 13:30:48.556757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:00.198 [2024-11-26 13:30:48.556824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:00.198 [2024-11-26 13:30:48.556841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:00.198 [2024-11-26 13:30:48.556939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.198 pt1 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.198 13:30:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.198 "name": "raid_bdev1", 00:18:00.198 "uuid": "78916082-7b85-457b-abd2-9041d93e25ed", 00:18:00.198 "strip_size_kb": 0, 00:18:00.198 "state": "online", 00:18:00.198 "raid_level": "raid1", 00:18:00.198 "superblock": true, 00:18:00.198 "num_base_bdevs": 2, 00:18:00.198 "num_base_bdevs_discovered": 1, 00:18:00.198 "num_base_bdevs_operational": 1, 00:18:00.198 "base_bdevs_list": [ 00:18:00.198 { 00:18:00.198 "name": null, 00:18:00.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.198 "is_configured": false, 00:18:00.198 "data_offset": 256, 00:18:00.198 "data_size": 7936 00:18:00.198 }, 00:18:00.198 { 00:18:00.198 "name": "pt2", 00:18:00.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.198 "is_configured": true, 00:18:00.198 "data_offset": 256, 00:18:00.198 "data_size": 7936 00:18:00.198 } 00:18:00.198 ] 00:18:00.198 }' 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.198 13:30:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.766 [2024-11-26 13:30:49.150209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 78916082-7b85-457b-abd2-9041d93e25ed '!=' 78916082-7b85-457b-abd2-9041d93e25ed ']' 00:18:00.766 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88375 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88375 ']' 00:18:00.767 13:30:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88375 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88375 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.767 killing process with pid 88375 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88375' 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88375 00:18:00.767 [2024-11-26 13:30:49.220632] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:00.767 [2024-11-26 13:30:49.220706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.767 13:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88375 00:18:00.767 [2024-11-26 13:30:49.220753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.767 [2024-11-26 13:30:49.220775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:01.025 [2024-11-26 13:30:49.360677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.963 13:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:01.963 00:18:01.963 real 0m6.329s 00:18:01.963 user 0m10.217s 00:18:01.963 sys 0m0.919s 
00:18:01.963 13:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.963 13:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.963 ************************************ 00:18:01.963 END TEST raid_superblock_test_md_interleaved 00:18:01.963 ************************************ 00:18:01.963 13:30:50 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:01.963 13:30:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:01.963 13:30:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.963 13:30:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.963 ************************************ 00:18:01.963 START TEST raid_rebuild_test_sb_md_interleaved 00:18:01.963 ************************************ 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.963 13:30:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:01.963 
13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88704 00:18:01.963 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88704 00:18:01.964 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:01.964 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88704 ']' 00:18:01.964 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.964 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.964 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.964 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.964 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.964 [2024-11-26 13:30:50.376870] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:18:01.964 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:01.964 Zero copy mechanism will not be used. 
00:18:01.964 [2024-11-26 13:30:50.377058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88704 ] 00:18:02.223 [2024-11-26 13:30:50.552268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.223 [2024-11-26 13:30:50.648878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.482 [2024-11-26 13:30:50.817884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.482 [2024-11-26 13:30:50.817931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.741 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.741 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:02.741 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.741 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:02.741 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.741 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.000 BaseBdev1_malloc 00:18:03.000 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.000 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.000 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.001 [2024-11-26 13:30:51.331944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:03.001 [2024-11-26 13:30:51.332043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.001 [2024-11-26 13:30:51.332069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:03.001 [2024-11-26 13:30:51.332086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.001 [2024-11-26 13:30:51.334220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.001 [2024-11-26 13:30:51.334307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.001 BaseBdev1 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.001 BaseBdev2_malloc 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.001 [2024-11-26 13:30:51.381228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:03.001 [2024-11-26 13:30:51.381324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.001 [2024-11-26 13:30:51.381351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:03.001 [2024-11-26 13:30:51.381369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.001 [2024-11-26 13:30:51.383876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.001 [2024-11-26 13:30:51.383919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:03.001 BaseBdev2 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.001 spare_malloc 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.001 spare_delay 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.001 [2024-11-26 13:30:51.446374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.001 [2024-11-26 13:30:51.446451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.001 [2024-11-26 13:30:51.446478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:03.001 [2024-11-26 13:30:51.446495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.001 [2024-11-26 13:30:51.448733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.001 [2024-11-26 13:30:51.448912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.001 spare 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.001 [2024-11-26 13:30:51.454412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.001 [2024-11-26 13:30:51.456635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.001 [2024-11-26 
13:30:51.456846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:03.001 [2024-11-26 13:30:51.456867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:03.001 [2024-11-26 13:30:51.456953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:03.001 [2024-11-26 13:30:51.457041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:03.001 [2024-11-26 13:30:51.457054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:03.001 [2024-11-26 13:30:51.457131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.001 "name": "raid_bdev1", 00:18:03.001 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:03.001 "strip_size_kb": 0, 00:18:03.001 "state": "online", 00:18:03.001 "raid_level": "raid1", 00:18:03.001 "superblock": true, 00:18:03.001 "num_base_bdevs": 2, 00:18:03.001 "num_base_bdevs_discovered": 2, 00:18:03.001 "num_base_bdevs_operational": 2, 00:18:03.001 "base_bdevs_list": [ 00:18:03.001 { 00:18:03.001 "name": "BaseBdev1", 00:18:03.001 "uuid": "5a4c92dd-3ef4-572d-81c1-dd54ed133239", 00:18:03.001 "is_configured": true, 00:18:03.001 "data_offset": 256, 00:18:03.001 "data_size": 7936 00:18:03.001 }, 00:18:03.001 { 00:18:03.001 "name": "BaseBdev2", 00:18:03.001 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:03.001 "is_configured": true, 00:18:03.001 "data_offset": 256, 00:18:03.001 "data_size": 7936 00:18:03.001 } 00:18:03.001 ] 00:18:03.001 }' 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.001 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 13:30:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.571 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:03.571 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 13:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 [2024-11-26 13:30:51.982808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:03.571 13:30:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 [2024-11-26 13:30:52.090537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.571 13:30:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.830 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.830 "name": "raid_bdev1", 00:18:03.830 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:03.830 "strip_size_kb": 0, 00:18:03.830 "state": "online", 00:18:03.830 "raid_level": "raid1", 00:18:03.830 "superblock": true, 00:18:03.830 "num_base_bdevs": 2, 00:18:03.830 "num_base_bdevs_discovered": 1, 00:18:03.830 "num_base_bdevs_operational": 1, 00:18:03.830 "base_bdevs_list": [ 00:18:03.830 { 00:18:03.830 "name": null, 00:18:03.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.830 "is_configured": false, 00:18:03.830 "data_offset": 0, 00:18:03.830 "data_size": 7936 00:18:03.830 }, 00:18:03.830 { 00:18:03.830 "name": "BaseBdev2", 00:18:03.830 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:03.830 "is_configured": true, 00:18:03.830 "data_offset": 256, 00:18:03.830 "data_size": 7936 00:18:03.830 } 00:18:03.830 ] 00:18:03.830 }' 00:18:03.830 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.830 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.089 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.089 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.089 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.089 [2024-11-26 13:30:52.614683] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.089 [2024-11-26 13:30:52.628890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:04.089 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.089 13:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:04.089 [2024-11-26 13:30:52.631132] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.468 "name": "raid_bdev1", 00:18:05.468 
"uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:05.468 "strip_size_kb": 0, 00:18:05.468 "state": "online", 00:18:05.468 "raid_level": "raid1", 00:18:05.468 "superblock": true, 00:18:05.468 "num_base_bdevs": 2, 00:18:05.468 "num_base_bdevs_discovered": 2, 00:18:05.468 "num_base_bdevs_operational": 2, 00:18:05.468 "process": { 00:18:05.468 "type": "rebuild", 00:18:05.468 "target": "spare", 00:18:05.468 "progress": { 00:18:05.468 "blocks": 2560, 00:18:05.468 "percent": 32 00:18:05.468 } 00:18:05.468 }, 00:18:05.468 "base_bdevs_list": [ 00:18:05.468 { 00:18:05.468 "name": "spare", 00:18:05.468 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:05.468 "is_configured": true, 00:18:05.468 "data_offset": 256, 00:18:05.468 "data_size": 7936 00:18:05.468 }, 00:18:05.468 { 00:18:05.468 "name": "BaseBdev2", 00:18:05.468 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:05.468 "is_configured": true, 00:18:05.468 "data_offset": 256, 00:18:05.468 "data_size": 7936 00:18:05.468 } 00:18:05.468 ] 00:18:05.468 }' 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.468 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.468 [2024-11-26 13:30:53.804844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:05.468 [2024-11-26 13:30:53.838568] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.468 [2024-11-26 13:30:53.838795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.468 [2024-11-26 13:30:53.838822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.469 [2024-11-26 13:30:53.838840] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.469 "name": "raid_bdev1", 00:18:05.469 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:05.469 "strip_size_kb": 0, 00:18:05.469 "state": "online", 00:18:05.469 "raid_level": "raid1", 00:18:05.469 "superblock": true, 00:18:05.469 "num_base_bdevs": 2, 00:18:05.469 "num_base_bdevs_discovered": 1, 00:18:05.469 "num_base_bdevs_operational": 1, 00:18:05.469 "base_bdevs_list": [ 00:18:05.469 { 00:18:05.469 "name": null, 00:18:05.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.469 "is_configured": false, 00:18:05.469 "data_offset": 0, 00:18:05.469 "data_size": 7936 00:18:05.469 }, 00:18:05.469 { 00:18:05.469 "name": "BaseBdev2", 00:18:05.469 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:05.469 "is_configured": true, 00:18:05.469 "data_offset": 256, 00:18:05.469 "data_size": 7936 00:18:05.469 } 00:18:05.469 ] 00:18:05.469 }' 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.469 13:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.068 "name": "raid_bdev1", 00:18:06.068 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:06.068 "strip_size_kb": 0, 00:18:06.068 "state": "online", 00:18:06.068 "raid_level": "raid1", 00:18:06.068 "superblock": true, 00:18:06.068 "num_base_bdevs": 2, 00:18:06.068 "num_base_bdevs_discovered": 1, 00:18:06.068 "num_base_bdevs_operational": 1, 00:18:06.068 "base_bdevs_list": [ 00:18:06.068 { 00:18:06.068 "name": null, 00:18:06.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.068 "is_configured": false, 00:18:06.068 "data_offset": 0, 00:18:06.068 "data_size": 7936 00:18:06.068 }, 00:18:06.068 { 00:18:06.068 "name": "BaseBdev2", 00:18:06.068 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:06.068 "is_configured": true, 00:18:06.068 "data_offset": 256, 00:18:06.068 "data_size": 7936 00:18:06.068 } 00:18:06.068 ] 00:18:06.068 }' 
00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.068 [2024-11-26 13:30:54.531291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.068 [2024-11-26 13:30:54.543093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.068 13:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:06.068 [2024-11-26 13:30:54.545436] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.006 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.265 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.265 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.265 "name": "raid_bdev1", 00:18:07.265 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:07.265 "strip_size_kb": 0, 00:18:07.265 "state": "online", 00:18:07.265 "raid_level": "raid1", 00:18:07.265 "superblock": true, 00:18:07.265 "num_base_bdevs": 2, 00:18:07.265 "num_base_bdevs_discovered": 2, 00:18:07.265 "num_base_bdevs_operational": 2, 00:18:07.265 "process": { 00:18:07.265 "type": "rebuild", 00:18:07.265 "target": "spare", 00:18:07.265 "progress": { 00:18:07.265 "blocks": 2560, 00:18:07.265 "percent": 32 00:18:07.265 } 00:18:07.265 }, 00:18:07.265 "base_bdevs_list": [ 00:18:07.265 { 00:18:07.265 "name": "spare", 00:18:07.265 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:07.265 "is_configured": true, 00:18:07.265 "data_offset": 256, 00:18:07.265 "data_size": 7936 00:18:07.265 }, 00:18:07.265 { 00:18:07.265 "name": "BaseBdev2", 00:18:07.265 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:07.265 "is_configured": true, 00:18:07.265 "data_offset": 256, 00:18:07.265 "data_size": 7936 00:18:07.265 } 00:18:07.265 ] 00:18:07.265 }' 00:18:07.265 13:30:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.265 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.265 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:07.266 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=757 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.266 13:30:55 
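[editor note] The `[: =: unary operator expected` message logged above is a classic bash pitfall: an unquoted variable that expands to nothing inside `[ ... ]`, leaving the test as `[ = false ]`. The snippet below is a minimal illustrative sketch of that failure mode and the quoting fix; the variable name `flag` is hypothetical and not taken from bdev_raid.sh.

```shell
#!/usr/bin/env bash
# Sketch of the failure mode: an empty, unquoted variable collapses the test
# to '[ = false ]', which bash rejects with "unary operator expected".
flag=""

# Quoting the expansion keeps the test well-formed even when the value is empty.
if [ "$flag" = false ]; then
  result="false"
else
  result="empty-or-other"
fi
echo "$result"
```

Running the unquoted form `[ $flag = false ]` with `flag=""` reproduces the exact error seen in the log; the quoted form above takes the else branch cleanly instead.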
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.266 "name": "raid_bdev1", 00:18:07.266 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:07.266 "strip_size_kb": 0, 00:18:07.266 "state": "online", 00:18:07.266 "raid_level": "raid1", 00:18:07.266 "superblock": true, 00:18:07.266 "num_base_bdevs": 2, 00:18:07.266 "num_base_bdevs_discovered": 2, 00:18:07.266 "num_base_bdevs_operational": 2, 00:18:07.266 "process": { 00:18:07.266 "type": "rebuild", 00:18:07.266 "target": "spare", 00:18:07.266 "progress": { 00:18:07.266 "blocks": 2816, 00:18:07.266 "percent": 35 00:18:07.266 } 00:18:07.266 }, 00:18:07.266 "base_bdevs_list": [ 00:18:07.266 { 00:18:07.266 "name": "spare", 00:18:07.266 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:07.266 "is_configured": true, 00:18:07.266 "data_offset": 256, 00:18:07.266 "data_size": 7936 00:18:07.266 }, 00:18:07.266 { 00:18:07.266 "name": "BaseBdev2", 00:18:07.266 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:07.266 "is_configured": true, 00:18:07.266 "data_offset": 256, 00:18:07.266 "data_size": 7936 00:18:07.266 } 00:18:07.266 ] 00:18:07.266 }' 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.266 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.525 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.525 13:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.462 13:30:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.462 "name": "raid_bdev1", 00:18:08.462 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:08.462 "strip_size_kb": 0, 00:18:08.462 "state": "online", 00:18:08.462 "raid_level": "raid1", 00:18:08.462 "superblock": true, 00:18:08.462 "num_base_bdevs": 2, 00:18:08.462 "num_base_bdevs_discovered": 2, 00:18:08.462 "num_base_bdevs_operational": 2, 00:18:08.462 "process": { 00:18:08.462 "type": "rebuild", 00:18:08.462 "target": "spare", 00:18:08.462 "progress": { 00:18:08.462 "blocks": 5888, 00:18:08.462 "percent": 74 00:18:08.462 } 00:18:08.462 }, 00:18:08.462 "base_bdevs_list": [ 00:18:08.462 { 00:18:08.462 "name": "spare", 00:18:08.462 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:08.462 "is_configured": true, 00:18:08.462 "data_offset": 256, 00:18:08.462 "data_size": 7936 00:18:08.462 }, 00:18:08.462 { 00:18:08.462 "name": "BaseBdev2", 00:18:08.462 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:08.462 "is_configured": true, 00:18:08.462 "data_offset": 256, 00:18:08.462 "data_size": 7936 00:18:08.462 } 00:18:08.462 ] 00:18:08.462 }' 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.462 13:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.721 13:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.721 13:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.290 [2024-11-26 13:30:57.663372] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:09.290 [2024-11-26 13:30:57.663439] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:09.290 [2024-11-26 13:30:57.663552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.549 "name": "raid_bdev1", 00:18:09.549 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:09.549 "strip_size_kb": 0, 00:18:09.549 "state": "online", 00:18:09.549 "raid_level": "raid1", 00:18:09.549 "superblock": true, 00:18:09.549 "num_base_bdevs": 2, 00:18:09.549 
"num_base_bdevs_discovered": 2, 00:18:09.549 "num_base_bdevs_operational": 2, 00:18:09.549 "base_bdevs_list": [ 00:18:09.549 { 00:18:09.549 "name": "spare", 00:18:09.549 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:09.549 "is_configured": true, 00:18:09.549 "data_offset": 256, 00:18:09.549 "data_size": 7936 00:18:09.549 }, 00:18:09.549 { 00:18:09.549 "name": "BaseBdev2", 00:18:09.549 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:09.549 "is_configured": true, 00:18:09.549 "data_offset": 256, 00:18:09.549 "data_size": 7936 00:18:09.549 } 00:18:09.549 ] 00:18:09.549 }' 00:18:09.549 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.810 
13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.810 "name": "raid_bdev1", 00:18:09.810 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:09.810 "strip_size_kb": 0, 00:18:09.810 "state": "online", 00:18:09.810 "raid_level": "raid1", 00:18:09.810 "superblock": true, 00:18:09.810 "num_base_bdevs": 2, 00:18:09.810 "num_base_bdevs_discovered": 2, 00:18:09.810 "num_base_bdevs_operational": 2, 00:18:09.810 "base_bdevs_list": [ 00:18:09.810 { 00:18:09.810 "name": "spare", 00:18:09.810 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:09.810 "is_configured": true, 00:18:09.810 "data_offset": 256, 00:18:09.810 "data_size": 7936 00:18:09.810 }, 00:18:09.810 { 00:18:09.810 "name": "BaseBdev2", 00:18:09.810 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:09.810 "is_configured": true, 00:18:09.810 "data_offset": 256, 00:18:09.810 "data_size": 7936 00:18:09.810 } 00:18:09.810 ] 00:18:09.810 }' 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.810 13:30:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.810 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.811 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.070 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.070 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.070 "name": 
"raid_bdev1", 00:18:10.070 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:10.070 "strip_size_kb": 0, 00:18:10.070 "state": "online", 00:18:10.070 "raid_level": "raid1", 00:18:10.070 "superblock": true, 00:18:10.070 "num_base_bdevs": 2, 00:18:10.070 "num_base_bdevs_discovered": 2, 00:18:10.070 "num_base_bdevs_operational": 2, 00:18:10.070 "base_bdevs_list": [ 00:18:10.070 { 00:18:10.070 "name": "spare", 00:18:10.070 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:10.070 "is_configured": true, 00:18:10.070 "data_offset": 256, 00:18:10.070 "data_size": 7936 00:18:10.070 }, 00:18:10.070 { 00:18:10.070 "name": "BaseBdev2", 00:18:10.070 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:10.070 "is_configured": true, 00:18:10.070 "data_offset": 256, 00:18:10.070 "data_size": 7936 00:18:10.070 } 00:18:10.070 ] 00:18:10.070 }' 00:18:10.070 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.070 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.329 [2024-11-26 13:30:58.868084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.329 [2024-11-26 13:30:58.868310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.329 [2024-11-26 13:30:58.868514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.329 [2024-11-26 13:30:58.868753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.329 [2024-11-26 
13:30:58.868901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.329 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.589 13:30:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.589 [2024-11-26 13:30:58.944101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.589 [2024-11-26 13:30:58.944153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.589 [2024-11-26 13:30:58.944183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:10.589 [2024-11-26 13:30:58.944195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.589 [2024-11-26 13:30:58.946413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.589 [2024-11-26 13:30:58.946648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.589 [2024-11-26 13:30:58.946748] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:10.589 [2024-11-26 13:30:58.946814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.589 [2024-11-26 13:30:58.946992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.589 spare 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.589 13:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.589 [2024-11-26 13:30:59.047091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:10.589 [2024-11-26 13:30:59.047119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:10.589 [2024-11-26 13:30:59.047208] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:10.589 [2024-11-26 13:30:59.047309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:10.589 [2024-11-26 13:30:59.047325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:10.589 [2024-11-26 13:30:59.047408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.589 13:30:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.589 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.589 "name": "raid_bdev1", 00:18:10.589 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:10.589 "strip_size_kb": 0, 00:18:10.589 "state": "online", 00:18:10.589 "raid_level": "raid1", 00:18:10.589 "superblock": true, 00:18:10.589 "num_base_bdevs": 2, 00:18:10.589 "num_base_bdevs_discovered": 2, 00:18:10.589 "num_base_bdevs_operational": 2, 00:18:10.589 "base_bdevs_list": [ 00:18:10.589 { 00:18:10.589 "name": "spare", 00:18:10.589 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:10.589 "is_configured": true, 00:18:10.589 "data_offset": 256, 00:18:10.589 "data_size": 7936 00:18:10.589 }, 00:18:10.589 { 00:18:10.589 "name": "BaseBdev2", 00:18:10.590 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:10.590 "is_configured": true, 00:18:10.590 "data_offset": 256, 00:18:10.590 "data_size": 7936 00:18:10.590 } 00:18:10.590 ] 00:18:10.590 }' 00:18:10.590 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.590 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.157 13:30:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.157 "name": "raid_bdev1", 00:18:11.157 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:11.157 "strip_size_kb": 0, 00:18:11.157 "state": "online", 00:18:11.157 "raid_level": "raid1", 00:18:11.157 "superblock": true, 00:18:11.157 "num_base_bdevs": 2, 00:18:11.157 "num_base_bdevs_discovered": 2, 00:18:11.157 "num_base_bdevs_operational": 2, 00:18:11.157 "base_bdevs_list": [ 00:18:11.157 { 00:18:11.157 "name": "spare", 00:18:11.157 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:11.157 "is_configured": true, 00:18:11.157 "data_offset": 256, 00:18:11.157 "data_size": 7936 00:18:11.157 }, 00:18:11.157 { 00:18:11.157 "name": "BaseBdev2", 00:18:11.157 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:11.157 "is_configured": true, 00:18:11.157 "data_offset": 256, 00:18:11.157 "data_size": 7936 00:18:11.157 } 00:18:11.157 ] 00:18:11.157 }' 00:18:11.157 13:30:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.157 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.417 [2024-11-26 13:30:59.780516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.417 13:30:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.417 "name": "raid_bdev1", 00:18:11.417 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:11.417 "strip_size_kb": 0, 00:18:11.417 "state": "online", 00:18:11.417 
"raid_level": "raid1", 00:18:11.417 "superblock": true, 00:18:11.417 "num_base_bdevs": 2, 00:18:11.417 "num_base_bdevs_discovered": 1, 00:18:11.417 "num_base_bdevs_operational": 1, 00:18:11.417 "base_bdevs_list": [ 00:18:11.417 { 00:18:11.417 "name": null, 00:18:11.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.417 "is_configured": false, 00:18:11.417 "data_offset": 0, 00:18:11.417 "data_size": 7936 00:18:11.417 }, 00:18:11.417 { 00:18:11.417 "name": "BaseBdev2", 00:18:11.417 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:11.417 "is_configured": true, 00:18:11.417 "data_offset": 256, 00:18:11.417 "data_size": 7936 00:18:11.417 } 00:18:11.417 ] 00:18:11.417 }' 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.417 13:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.985 13:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.985 13:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.985 13:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.985 [2024-11-26 13:31:00.296674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.985 [2024-11-26 13:31:00.296804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:11.985 [2024-11-26 13:31:00.296826] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:11.985 [2024-11-26 13:31:00.296866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.985 [2024-11-26 13:31:00.310035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:11.985 13:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.985 13:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:11.985 [2024-11-26 13:31:00.312402] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:12.922 "name": "raid_bdev1", 00:18:12.922 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:12.922 "strip_size_kb": 0, 00:18:12.922 "state": "online", 00:18:12.922 "raid_level": "raid1", 00:18:12.922 "superblock": true, 00:18:12.922 "num_base_bdevs": 2, 00:18:12.922 "num_base_bdevs_discovered": 2, 00:18:12.922 "num_base_bdevs_operational": 2, 00:18:12.922 "process": { 00:18:12.922 "type": "rebuild", 00:18:12.922 "target": "spare", 00:18:12.922 "progress": { 00:18:12.922 "blocks": 2560, 00:18:12.922 "percent": 32 00:18:12.922 } 00:18:12.922 }, 00:18:12.922 "base_bdevs_list": [ 00:18:12.922 { 00:18:12.922 "name": "spare", 00:18:12.922 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:12.922 "is_configured": true, 00:18:12.922 "data_offset": 256, 00:18:12.922 "data_size": 7936 00:18:12.922 }, 00:18:12.922 { 00:18:12.922 "name": "BaseBdev2", 00:18:12.922 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:12.922 "is_configured": true, 00:18:12.922 "data_offset": 256, 00:18:12.922 "data_size": 7936 00:18:12.922 } 00:18:12.922 ] 00:18:12.922 }' 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.922 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.182 [2024-11-26 13:31:01.485975] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.182 [2024-11-26 13:31:01.519740] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:13.182 [2024-11-26 13:31:01.519826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.182 [2024-11-26 13:31:01.519848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.182 [2024-11-26 13:31:01.519877] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.182 13:31:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.182 "name": "raid_bdev1", 00:18:13.182 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:13.182 "strip_size_kb": 0, 00:18:13.182 "state": "online", 00:18:13.182 "raid_level": "raid1", 00:18:13.182 "superblock": true, 00:18:13.182 "num_base_bdevs": 2, 00:18:13.182 "num_base_bdevs_discovered": 1, 00:18:13.182 "num_base_bdevs_operational": 1, 00:18:13.182 "base_bdevs_list": [ 00:18:13.182 { 00:18:13.182 "name": null, 00:18:13.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.182 "is_configured": false, 00:18:13.182 "data_offset": 0, 00:18:13.182 "data_size": 7936 00:18:13.182 }, 00:18:13.182 { 00:18:13.182 "name": "BaseBdev2", 00:18:13.182 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:13.182 "is_configured": true, 00:18:13.182 "data_offset": 256, 00:18:13.182 "data_size": 7936 00:18:13.182 } 00:18:13.182 ] 00:18:13.182 }' 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.182 13:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.750 13:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:13.750 13:31:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.750 13:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.750 [2024-11-26 13:31:02.077170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.750 [2024-11-26 13:31:02.077449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.750 [2024-11-26 13:31:02.077486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:13.750 [2024-11-26 13:31:02.077505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.750 [2024-11-26 13:31:02.077748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.750 [2024-11-26 13:31:02.077777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.750 [2024-11-26 13:31:02.077836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:13.750 [2024-11-26 13:31:02.077886] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:13.750 [2024-11-26 13:31:02.077896] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:13.750 [2024-11-26 13:31:02.077929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.750 [2024-11-26 13:31:02.089085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:13.750 spare 00:18:13.750 13:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.750 13:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:13.750 [2024-11-26 13:31:02.091609] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:14.687 "name": "raid_bdev1", 00:18:14.687 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:14.687 "strip_size_kb": 0, 00:18:14.687 "state": "online", 00:18:14.687 "raid_level": "raid1", 00:18:14.687 "superblock": true, 00:18:14.687 "num_base_bdevs": 2, 00:18:14.687 "num_base_bdevs_discovered": 2, 00:18:14.687 "num_base_bdevs_operational": 2, 00:18:14.687 "process": { 00:18:14.687 "type": "rebuild", 00:18:14.687 "target": "spare", 00:18:14.687 "progress": { 00:18:14.687 "blocks": 2560, 00:18:14.687 "percent": 32 00:18:14.687 } 00:18:14.687 }, 00:18:14.687 "base_bdevs_list": [ 00:18:14.687 { 00:18:14.687 "name": "spare", 00:18:14.687 "uuid": "50e726a2-4cc5-5131-9fd5-1103e0a1b1ea", 00:18:14.687 "is_configured": true, 00:18:14.687 "data_offset": 256, 00:18:14.687 "data_size": 7936 00:18:14.687 }, 00:18:14.687 { 00:18:14.687 "name": "BaseBdev2", 00:18:14.687 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:14.687 "is_configured": true, 00:18:14.687 "data_offset": 256, 00:18:14.687 "data_size": 7936 00:18:14.687 } 00:18:14.687 ] 00:18:14.687 }' 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.687 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.946 [2024-11-26 
13:31:03.269347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.946 [2024-11-26 13:31:03.298323] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:14.946 [2024-11-26 13:31:03.298383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.946 [2024-11-26 13:31:03.298406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.946 [2024-11-26 13:31:03.298416] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.946 13:31:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.946 "name": "raid_bdev1", 00:18:14.946 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:14.946 "strip_size_kb": 0, 00:18:14.946 "state": "online", 00:18:14.946 "raid_level": "raid1", 00:18:14.946 "superblock": true, 00:18:14.946 "num_base_bdevs": 2, 00:18:14.946 "num_base_bdevs_discovered": 1, 00:18:14.946 "num_base_bdevs_operational": 1, 00:18:14.946 "base_bdevs_list": [ 00:18:14.946 { 00:18:14.946 "name": null, 00:18:14.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.946 "is_configured": false, 00:18:14.946 "data_offset": 0, 00:18:14.946 "data_size": 7936 00:18:14.946 }, 00:18:14.946 { 00:18:14.946 "name": "BaseBdev2", 00:18:14.946 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:14.946 "is_configured": true, 00:18:14.946 "data_offset": 256, 00:18:14.946 "data_size": 7936 00:18:14.946 } 00:18:14.946 ] 00:18:14.946 }' 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.946 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.514 13:31:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.514 "name": "raid_bdev1", 00:18:15.514 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:15.514 "strip_size_kb": 0, 00:18:15.514 "state": "online", 00:18:15.514 "raid_level": "raid1", 00:18:15.514 "superblock": true, 00:18:15.514 "num_base_bdevs": 2, 00:18:15.514 "num_base_bdevs_discovered": 1, 00:18:15.514 "num_base_bdevs_operational": 1, 00:18:15.514 "base_bdevs_list": [ 00:18:15.514 { 00:18:15.514 "name": null, 00:18:15.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.514 "is_configured": false, 00:18:15.514 "data_offset": 0, 00:18:15.514 "data_size": 7936 00:18:15.514 }, 00:18:15.514 { 00:18:15.514 "name": "BaseBdev2", 00:18:15.514 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:15.514 "is_configured": true, 00:18:15.514 "data_offset": 256, 
00:18:15.514 "data_size": 7936 00:18:15.514 } 00:18:15.514 ] 00:18:15.514 }' 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 [2024-11-26 13:31:03.959090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:15.514 [2024-11-26 13:31:03.959153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.514 [2024-11-26 13:31:03.959223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:15.514 [2024-11-26 13:31:03.959248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.514 [2024-11-26 13:31:03.959443] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.514 [2024-11-26 13:31:03.959465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:15.514 [2024-11-26 13:31:03.959557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:15.514 [2024-11-26 13:31:03.959606] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.514 [2024-11-26 13:31:03.959635] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:15.514 [2024-11-26 13:31:03.959646] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:15.514 BaseBdev1 00:18:15.514 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.515 13:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.452 13:31:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 13:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.710 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.710 "name": "raid_bdev1", 00:18:16.710 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:16.710 "strip_size_kb": 0, 00:18:16.710 "state": "online", 00:18:16.710 "raid_level": "raid1", 00:18:16.710 "superblock": true, 00:18:16.710 "num_base_bdevs": 2, 00:18:16.710 "num_base_bdevs_discovered": 1, 00:18:16.710 "num_base_bdevs_operational": 1, 00:18:16.710 "base_bdevs_list": [ 00:18:16.710 { 00:18:16.710 "name": null, 00:18:16.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.710 "is_configured": false, 00:18:16.710 "data_offset": 0, 00:18:16.710 "data_size": 7936 00:18:16.710 }, 00:18:16.710 { 00:18:16.710 "name": "BaseBdev2", 00:18:16.710 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:16.710 "is_configured": true, 00:18:16.710 "data_offset": 256, 00:18:16.710 "data_size": 7936 00:18:16.710 } 00:18:16.710 ] 00:18:16.710 }' 00:18:16.710 13:31:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.710 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.968 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.968 "name": "raid_bdev1", 00:18:16.968 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:16.968 "strip_size_kb": 0, 00:18:16.968 "state": "online", 00:18:16.968 "raid_level": "raid1", 00:18:16.968 "superblock": true, 00:18:16.968 "num_base_bdevs": 2, 00:18:16.968 "num_base_bdevs_discovered": 1, 00:18:16.968 "num_base_bdevs_operational": 1, 00:18:16.968 "base_bdevs_list": [ 00:18:16.968 { 00:18:16.968 "name": 
null, 00:18:16.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.968 "is_configured": false, 00:18:16.968 "data_offset": 0, 00:18:16.968 "data_size": 7936 00:18:16.968 }, 00:18:16.968 { 00:18:16.968 "name": "BaseBdev2", 00:18:16.968 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:16.968 "is_configured": true, 00:18:16.968 "data_offset": 256, 00:18:16.968 "data_size": 7936 00:18:16.968 } 00:18:16.968 ] 00:18:16.968 }' 00:18:17.226 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.226 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.226 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.226 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.226 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.227 [2024-11-26 13:31:05.639535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.227 [2024-11-26 13:31:05.639700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.227 [2024-11-26 13:31:05.639724] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:17.227 request: 00:18:17.227 { 00:18:17.227 "base_bdev": "BaseBdev1", 00:18:17.227 "raid_bdev": "raid_bdev1", 00:18:17.227 "method": "bdev_raid_add_base_bdev", 00:18:17.227 "req_id": 1 00:18:17.227 } 00:18:17.227 Got JSON-RPC error response 00:18:17.227 response: 00:18:17.227 { 00:18:17.227 "code": -22, 00:18:17.227 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:17.227 } 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:17.227 13:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.163 "name": "raid_bdev1", 00:18:18.163 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:18.163 "strip_size_kb": 0, 
00:18:18.163 "state": "online", 00:18:18.163 "raid_level": "raid1", 00:18:18.163 "superblock": true, 00:18:18.163 "num_base_bdevs": 2, 00:18:18.163 "num_base_bdevs_discovered": 1, 00:18:18.163 "num_base_bdevs_operational": 1, 00:18:18.163 "base_bdevs_list": [ 00:18:18.163 { 00:18:18.163 "name": null, 00:18:18.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.163 "is_configured": false, 00:18:18.163 "data_offset": 0, 00:18:18.163 "data_size": 7936 00:18:18.163 }, 00:18:18.163 { 00:18:18.163 "name": "BaseBdev2", 00:18:18.163 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:18.163 "is_configured": true, 00:18:18.163 "data_offset": 256, 00:18:18.163 "data_size": 7936 00:18:18.163 } 00:18:18.163 ] 00:18:18.163 }' 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.163 13:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.732 
13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.732 "name": "raid_bdev1", 00:18:18.732 "uuid": "61b3fefa-00a2-4a78-8438-f9edf4ea60f1", 00:18:18.732 "strip_size_kb": 0, 00:18:18.732 "state": "online", 00:18:18.732 "raid_level": "raid1", 00:18:18.732 "superblock": true, 00:18:18.732 "num_base_bdevs": 2, 00:18:18.732 "num_base_bdevs_discovered": 1, 00:18:18.732 "num_base_bdevs_operational": 1, 00:18:18.732 "base_bdevs_list": [ 00:18:18.732 { 00:18:18.732 "name": null, 00:18:18.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.732 "is_configured": false, 00:18:18.732 "data_offset": 0, 00:18:18.732 "data_size": 7936 00:18:18.732 }, 00:18:18.732 { 00:18:18.732 "name": "BaseBdev2", 00:18:18.732 "uuid": "5c511872-4e09-5a1e-bcf1-2137abe3cd5e", 00:18:18.732 "is_configured": true, 00:18:18.732 "data_offset": 256, 00:18:18.732 "data_size": 7936 00:18:18.732 } 00:18:18.732 ] 00:18:18.732 }' 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.732 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.991 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.991 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88704 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88704 ']' 00:18:18.992 13:31:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88704 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88704 00:18:18.992 killing process with pid 88704 00:18:18.992 Received shutdown signal, test time was about 60.000000 seconds 00:18:18.992 00:18:18.992 Latency(us) 00:18:18.992 [2024-11-26T13:31:07.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.992 [2024-11-26T13:31:07.562Z] =================================================================================================================== 00:18:18.992 [2024-11-26T13:31:07.562Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88704' 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88704 00:18:18.992 [2024-11-26 13:31:07.363154] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.992 13:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88704 00:18:18.992 [2024-11-26 13:31:07.363346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.992 [2024-11-26 13:31:07.363399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:18.992 [2024-11-26 13:31:07.363416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:19.250 [2024-11-26 13:31:07.576858] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.187 13:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:20.187 00:18:20.187 real 0m18.141s 00:18:20.187 user 0m24.958s 00:18:20.187 sys 0m1.369s 00:18:20.187 ************************************ 00:18:20.187 END TEST raid_rebuild_test_sb_md_interleaved 00:18:20.187 ************************************ 00:18:20.187 13:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.187 13:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.187 13:31:08 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:20.187 13:31:08 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:20.187 13:31:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88704 ']' 00:18:20.187 13:31:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88704 00:18:20.187 13:31:08 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:20.187 ************************************ 00:18:20.187 END TEST bdev_raid 00:18:20.187 ************************************ 00:18:20.187 00:18:20.187 real 12m20.084s 00:18:20.187 user 17m37.824s 00:18:20.187 sys 1m41.251s 00:18:20.187 13:31:08 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.187 13:31:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.187 13:31:08 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:20.187 13:31:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:20.187 13:31:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.187 13:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:20.187 
************************************ 00:18:20.187 START TEST spdkcli_raid 00:18:20.187 ************************************ 00:18:20.187 13:31:08 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:20.187 * Looking for test storage... 00:18:20.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:20.187 13:31:08 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:20.187 13:31:08 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:20.187 13:31:08 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:20.187 13:31:08 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:20.187 13:31:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.188 13:31:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:20.188 13:31:08 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.188 13:31:08 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.188 13:31:08 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.188 13:31:08 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:20.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.188 --rc genhtml_branch_coverage=1 00:18:20.188 --rc genhtml_function_coverage=1 00:18:20.188 --rc genhtml_legend=1 00:18:20.188 --rc geninfo_all_blocks=1 00:18:20.188 --rc geninfo_unexecuted_blocks=1 00:18:20.188 00:18:20.188 ' 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:20.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.188 --rc genhtml_branch_coverage=1 00:18:20.188 --rc genhtml_function_coverage=1 00:18:20.188 --rc genhtml_legend=1 00:18:20.188 --rc geninfo_all_blocks=1 00:18:20.188 --rc geninfo_unexecuted_blocks=1 00:18:20.188 00:18:20.188 ' 00:18:20.188 
13:31:08 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:20.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.188 --rc genhtml_branch_coverage=1 00:18:20.188 --rc genhtml_function_coverage=1 00:18:20.188 --rc genhtml_legend=1 00:18:20.188 --rc geninfo_all_blocks=1 00:18:20.188 --rc geninfo_unexecuted_blocks=1 00:18:20.188 00:18:20.188 ' 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:20.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.188 --rc genhtml_branch_coverage=1 00:18:20.188 --rc genhtml_function_coverage=1 00:18:20.188 --rc genhtml_legend=1 00:18:20.188 --rc geninfo_all_blocks=1 00:18:20.188 --rc geninfo_unexecuted_blocks=1 00:18:20.188 00:18:20.188 ' 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:20.188 13:31:08 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89386 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89386 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89386 ']' 00:18:20.188 13:31:08 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.188 13:31:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.447 [2024-11-26 13:31:08.867759] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:18:20.447 [2024-11-26 13:31:08.867945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89386 ] 00:18:20.706 [2024-11-26 13:31:09.057102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:20.706 [2024-11-26 13:31:09.206077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.706 [2024-11-26 13:31:09.206094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.642 13:31:09 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.642 13:31:09 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:21.642 13:31:09 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:21.642 13:31:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.642 13:31:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.642 13:31:09 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:21.642 13:31:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.642 13:31:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.642 13:31:09 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:21.642 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:21.642 ' 00:18:23.016 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:23.016 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:23.275 13:31:11 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:23.275 13:31:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.275 13:31:11 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.275 13:31:11 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:23.275 13:31:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.275 13:31:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.275 13:31:11 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:23.275 ' 00:18:24.212 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:24.471 13:31:12 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:24.471 13:31:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.471 13:31:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.471 13:31:12 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:24.471 13:31:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.471 13:31:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.471 13:31:12 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:24.471 13:31:12 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:25.040 13:31:13 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:25.040 13:31:13 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:25.040 13:31:13 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:25.040 13:31:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.040 13:31:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.040 13:31:13 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:25.040 13:31:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.040 13:31:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.040 13:31:13 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:25.040 ' 00:18:26.419 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:26.419 13:31:14 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:26.419 13:31:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.419 13:31:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.419 13:31:14 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:26.419 13:31:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.419 13:31:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.419 13:31:14 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:26.419 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:26.419 ' 00:18:27.798 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:27.798 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:27.798 13:31:16 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.798 13:31:16 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89386 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89386 ']' 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89386 00:18:27.798 13:31:16 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89386 00:18:27.798 killing process with pid 89386 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89386' 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89386 00:18:27.798 13:31:16 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89386 00:18:29.703 13:31:18 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:29.703 13:31:18 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89386 ']' 00:18:29.703 13:31:18 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89386 00:18:29.703 13:31:18 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89386 ']' 00:18:29.703 13:31:18 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89386 00:18:29.703 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89386) - No such process 00:18:29.703 Process with pid 89386 is not found 00:18:29.703 13:31:18 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89386 is not found' 00:18:29.703 13:31:18 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:29.703 13:31:18 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:29.703 13:31:18 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:29.703 13:31:18 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:29.703 ************************************ 00:18:29.703 END TEST spdkcli_raid 
00:18:29.703 ************************************ 00:18:29.703 00:18:29.703 real 0m9.563s 00:18:29.703 user 0m19.808s 00:18:29.703 sys 0m1.131s 00:18:29.703 13:31:18 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.703 13:31:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.703 13:31:18 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:29.703 13:31:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:29.703 13:31:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.703 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:18:29.703 ************************************ 00:18:29.703 START TEST blockdev_raid5f 00:18:29.703 ************************************ 00:18:29.703 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:29.703 * Looking for test storage... 00:18:29.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:29.703 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:29.703 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:29.703 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:29.963 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:29.963 13:31:18 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.964 13:31:18 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:29.964 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.964 --rc genhtml_branch_coverage=1 00:18:29.964 --rc genhtml_function_coverage=1 00:18:29.964 --rc genhtml_legend=1 00:18:29.964 --rc geninfo_all_blocks=1 00:18:29.964 --rc geninfo_unexecuted_blocks=1 00:18:29.964 00:18:29.964 ' 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:29.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.964 --rc genhtml_branch_coverage=1 00:18:29.964 --rc genhtml_function_coverage=1 00:18:29.964 --rc genhtml_legend=1 00:18:29.964 --rc geninfo_all_blocks=1 00:18:29.964 --rc geninfo_unexecuted_blocks=1 00:18:29.964 00:18:29.964 ' 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:29.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.964 --rc genhtml_branch_coverage=1 00:18:29.964 --rc genhtml_function_coverage=1 00:18:29.964 --rc genhtml_legend=1 00:18:29.964 --rc geninfo_all_blocks=1 00:18:29.964 --rc geninfo_unexecuted_blocks=1 00:18:29.964 00:18:29.964 ' 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:29.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.964 --rc genhtml_branch_coverage=1 00:18:29.964 --rc genhtml_function_coverage=1 00:18:29.964 --rc genhtml_legend=1 00:18:29.964 --rc geninfo_all_blocks=1 00:18:29.964 --rc geninfo_unexecuted_blocks=1 00:18:29.964 00:18:29.964 ' 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89655 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
89655 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89655 ']' 00:18:29.964 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.964 13:31:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:29.964 [2024-11-26 13:31:18.436042] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:18:29.964 [2024-11-26 13:31:18.436189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89655 ] 00:18:30.223 [2024-11-26 13:31:18.599900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.223 [2024-11-26 13:31:18.696014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.159 13:31:19 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.159 13:31:19 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:31.159 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:31.159 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:31.159 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:31.159 13:31:19 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.160 Malloc0 00:18:31.160 Malloc1 00:18:31.160 Malloc2 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "850b0867-b1f1-47ac-b00d-a38396d8c5da"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "850b0867-b1f1-47ac-b00d-a38396d8c5da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "850b0867-b1f1-47ac-b00d-a38396d8c5da",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ed02a436-a620-49ea-8bcc-756db42fb8c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "447693ae-e890-4360-ac04-cc6dd9675ce3",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e446349a-8cef-44c5-8856-035600fbb438",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:31.160 13:31:19 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89655 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89655 ']' 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89655 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.160 13:31:19 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89655 00:18:31.419 13:31:19 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.419 13:31:19 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.419 killing process with pid 89655 00:18:31.419 13:31:19 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89655' 00:18:31.419 13:31:19 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89655 00:18:31.419 13:31:19 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89655 00:18:33.420 13:31:21 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:33.420 13:31:21 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:33.420 13:31:21 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:33.420 13:31:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.420 13:31:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:33.420 ************************************ 00:18:33.420 START TEST bdev_hello_world 00:18:33.420 ************************************ 00:18:33.420 13:31:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:33.420 [2024-11-26 13:31:21.802980] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:18:33.420 [2024-11-26 13:31:21.803161] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89710 ] 00:18:33.420 [2024-11-26 13:31:21.977585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.679 [2024-11-26 13:31:22.078140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.247 [2024-11-26 13:31:22.523999] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:34.247 [2024-11-26 13:31:22.524065] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:34.247 [2024-11-26 13:31:22.524100] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:34.247 [2024-11-26 13:31:22.524687] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:34.247 [2024-11-26 13:31:22.524885] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:34.247 [2024-11-26 13:31:22.524928] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:34.247 [2024-11-26 13:31:22.524993] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:34.247 00:18:34.247 [2024-11-26 13:31:22.525020] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:35.181 00:18:35.181 real 0m1.895s 00:18:35.181 user 0m1.484s 00:18:35.181 sys 0m0.288s 00:18:35.181 13:31:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.181 13:31:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:35.181 ************************************ 00:18:35.181 END TEST bdev_hello_world 00:18:35.181 ************************************ 00:18:35.181 13:31:23 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:35.181 13:31:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:35.181 13:31:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.181 13:31:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.181 ************************************ 00:18:35.181 START TEST bdev_bounds 00:18:35.181 ************************************ 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89748 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:35.181 Process bdevio pid: 89748 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89748' 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89748 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89748 ']' 00:18:35.181 13:31:23 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.181 13:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:35.181 [2024-11-26 13:31:23.739354] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:18:35.181 [2024-11-26 13:31:23.739545] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89748 ] 00:18:35.439 [2024-11-26 13:31:23.916893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.698 [2024-11-26 13:31:24.075548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.698 [2024-11-26 13:31:24.075691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.698 [2024-11-26 13:31:24.075702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.265 13:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.265 13:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:36.265 13:31:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:36.265 I/O targets: 00:18:36.265 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:36.265 00:18:36.265 
00:18:36.265 CUnit - A unit testing framework for C - Version 2.1-3 00:18:36.265 http://cunit.sourceforge.net/ 00:18:36.265 00:18:36.265 00:18:36.265 Suite: bdevio tests on: raid5f 00:18:36.265 Test: blockdev write read block ...passed 00:18:36.265 Test: blockdev write zeroes read block ...passed 00:18:36.523 Test: blockdev write zeroes read no split ...passed 00:18:36.523 Test: blockdev write zeroes read split ...passed 00:18:36.523 Test: blockdev write zeroes read split partial ...passed 00:18:36.523 Test: blockdev reset ...passed 00:18:36.523 Test: blockdev write read 8 blocks ...passed 00:18:36.523 Test: blockdev write read size > 128k ...passed 00:18:36.523 Test: blockdev write read invalid size ...passed 00:18:36.523 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:36.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:36.523 Test: blockdev write read max offset ...passed 00:18:36.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:36.523 Test: blockdev writev readv 8 blocks ...passed 00:18:36.523 Test: blockdev writev readv 30 x 1block ...passed 00:18:36.523 Test: blockdev writev readv block ...passed 00:18:36.523 Test: blockdev writev readv size > 128k ...passed 00:18:36.523 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:36.523 Test: blockdev comparev and writev ...passed 00:18:36.523 Test: blockdev nvme passthru rw ...passed 00:18:36.523 Test: blockdev nvme passthru vendor specific ...passed 00:18:36.523 Test: blockdev nvme admin passthru ...passed 00:18:36.523 Test: blockdev copy ...passed 00:18:36.523 00:18:36.523 Run Summary: Type Total Ran Passed Failed Inactive 00:18:36.523 suites 1 1 n/a 0 0 00:18:36.523 tests 23 23 23 0 0 00:18:36.523 asserts 130 130 130 0 n/a 00:18:36.523 00:18:36.523 Elapsed time = 0.489 seconds 00:18:36.523 0 00:18:36.523 13:31:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89748 00:18:36.523 
13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89748 ']' 00:18:36.523 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89748 00:18:36.523 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:36.523 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.523 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89748 00:18:36.523 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.523 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.524 killing process with pid 89748 00:18:36.524 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89748' 00:18:36.524 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89748 00:18:36.524 13:31:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89748 00:18:37.902 13:31:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:37.902 00:18:37.902 real 0m2.560s 00:18:37.902 user 0m6.287s 00:18:37.902 sys 0m0.432s 00:18:37.902 13:31:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.902 ************************************ 00:18:37.902 END TEST bdev_bounds 00:18:37.902 13:31:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:37.902 ************************************ 00:18:37.902 13:31:26 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:37.902 13:31:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:37.902 13:31:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.902 
13:31:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.902 ************************************ 00:18:37.902 START TEST bdev_nbd 00:18:37.902 ************************************ 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89808 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89808 /var/tmp/spdk-nbd.sock 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89808 ']' 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.902 13:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:37.902 [2024-11-26 13:31:26.385307] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:18:37.902 [2024-11-26 13:31:26.385527] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.161 [2024-11-26 13:31:26.569988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.161 [2024-11-26 13:31:26.679902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:38.729 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:38.730 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:38.730 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:38.730 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:38.730 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:38.730 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:38.730 13:31:27 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:38.988 1+0 records in 00:18:38.988 1+0 records out 00:18:38.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252926 s, 16.2 MB/s 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:38.988 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:38.989 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:39.247 { 00:18:39.247 "nbd_device": "/dev/nbd0", 00:18:39.247 "bdev_name": "raid5f" 00:18:39.247 } 00:18:39.247 ]' 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:39.247 { 00:18:39.247 "nbd_device": "/dev/nbd0", 00:18:39.247 "bdev_name": "raid5f" 00:18:39.247 } 00:18:39.247 ]' 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.247 13:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:39.815 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:40.074 /dev/nbd0 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:40.334 13:31:28 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.334 1+0 records in 00:18:40.334 1+0 records out 00:18:40.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306566 s, 13.4 MB/s 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.334 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:40.593 { 00:18:40.593 "nbd_device": "/dev/nbd0", 00:18:40.593 "bdev_name": "raid5f" 00:18:40.593 } 00:18:40.593 ]' 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:40.593 { 00:18:40.593 "nbd_device": "/dev/nbd0", 00:18:40.593 "bdev_name": "raid5f" 00:18:40.593 } 00:18:40.593 ]' 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:40.593 13:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:40.593 256+0 records in 00:18:40.593 256+0 records out 00:18:40.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00897694 s, 117 MB/s 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:40.593 256+0 records in 00:18:40.593 256+0 records out 00:18:40.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344377 s, 30.4 MB/s 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.593 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.850 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:41.108 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:41.367 malloc_lvol_verify 00:18:41.367 13:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:41.626 070e1c22-4634-48d5-802d-4921c16351ae 00:18:41.626 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:41.885 3fa2adb4-da78-4332-adf5-1c42107e7a07 00:18:41.885 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:42.143 /dev/nbd0 00:18:42.143 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:42.143 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:42.143 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:42.144 mke2fs 1.47.0 (5-Feb-2023) 00:18:42.144 Discarding device blocks: 0/4096 done 00:18:42.144 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:42.144 00:18:42.144 Allocating group tables: 0/1 done 00:18:42.144 Writing inode tables: 0/1 done 00:18:42.144 Creating journal (1024 blocks): done 00:18:42.144 Writing superblocks and filesystem accounting information: 0/1 done 00:18:42.144 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.144 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89808 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89808 ']' 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89808 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89808 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.403 killing process with pid 89808 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89808' 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89808 00:18:42.403 13:31:30 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89808 00:18:43.782 13:31:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:43.782 00:18:43.782 real 0m5.740s 00:18:43.782 user 0m8.153s 00:18:43.782 sys 0m1.302s 00:18:43.782 13:31:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.782 13:31:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:43.782 ************************************ 00:18:43.782 END TEST bdev_nbd 00:18:43.782 ************************************ 00:18:43.782 13:31:32 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:43.782 13:31:32 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:43.782 13:31:32 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:43.782 13:31:32 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:43.782 13:31:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.782 13:31:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.782 13:31:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:43.782 ************************************ 00:18:43.782 START TEST bdev_fio 00:18:43.782 ************************************ 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:43.782 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:43.782 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:43.783 ************************************ 00:18:43.783 START TEST bdev_fio_rw_verify 00:18:43.783 ************************************ 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.783 13:31:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:44.042 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:44.042 fio-3.35 00:18:44.042 Starting 1 thread 00:18:56.250 00:18:56.250 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90012: Tue Nov 26 13:31:43 2024 00:18:56.250 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(428MiB/10001msec) 00:18:56.250 slat (usec): min=19, max=1050, avg=21.85, stdev= 4.91 00:18:56.250 clat (usec): min=12, max=1306, avg=146.11, stdev=53.36 00:18:56.250 lat (usec): min=35, max=1329, avg=167.96, stdev=54.31 00:18:56.250 clat percentiles (usec): 00:18:56.250 | 50.000th=[ 151], 99.000th=[ 258], 99.900th=[ 351], 99.990th=[ 396], 00:18:56.250 | 99.999th=[ 1287] 00:18:56.250 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(443MiB/9880msec); 0 zone resets 00:18:56.250 slat (usec): min=9, max=194, avg=18.58, stdev= 4.60 00:18:56.250 clat (usec): min=65, max=1235, avg=333.66, stdev=49.92 00:18:56.250 lat (usec): min=82, max=1429, avg=352.24, stdev=51.62 00:18:56.250 clat percentiles (usec): 00:18:56.250 | 50.000th=[ 338], 99.000th=[ 490], 99.900th=[ 611], 99.990th=[ 1057], 00:18:56.250 | 99.999th=[ 1172] 00:18:56.250 bw ( KiB/s): min=41181, max=48840, per=98.86%, avg=45350.58, stdev=2057.72, samples=19 00:18:56.250 iops : min=10295, max=12210, avg=11337.63, stdev=514.46, samples=19 00:18:56.250 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.63%, 250=38.67%, 500=49.26% 00:18:56.250 lat (usec) : 750=0.42%, 1000=0.01% 00:18:56.250 lat (msec) : 2=0.01% 00:18:56.250 cpu : usr=98.65%, sys=0.48%, ctx=34, majf=0, minf=9118 00:18:56.250 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:56.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.250 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.250 issued rwts: total=109580,113302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.250 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:56.250 00:18:56.250 Run status group 0 (all jobs): 00:18:56.250 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=428MiB (449MB), run=10001-10001msec 00:18:56.250 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=443MiB (464MB), run=9880-9880msec 00:18:56.250 ----------------------------------------------------- 00:18:56.250 Suppressions used: 00:18:56.250 count bytes template 00:18:56.250 1 7 /usr/src/fio/parse.c 00:18:56.250 326 31296 /usr/src/fio/iolog.c 00:18:56.250 1 8 libtcmalloc_minimal.so 00:18:56.250 1 904 libcrypto.so 00:18:56.250 ----------------------------------------------------- 00:18:56.250 00:18:56.250 00:18:56.250 real 0m12.450s 00:18:56.250 user 0m12.735s 00:18:56.250 sys 0m0.726s 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.250 ************************************ 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:56.250 END TEST bdev_fio_rw_verify 00:18:56.250 ************************************ 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "850b0867-b1f1-47ac-b00d-a38396d8c5da"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"850b0867-b1f1-47ac-b00d-a38396d8c5da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "850b0867-b1f1-47ac-b00d-a38396d8c5da",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ed02a436-a620-49ea-8bcc-756db42fb8c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "447693ae-e890-4360-ac04-cc6dd9675ce3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e446349a-8cef-44c5-8856-035600fbb438",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:56.250 /home/vagrant/spdk_repo/spdk 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:18:56.250 00:18:56.250 real 0m12.686s 00:18:56.250 user 0m12.854s 00:18:56.250 sys 0m0.818s 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.250 13:31:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:56.250 ************************************ 00:18:56.250 END TEST bdev_fio 00:18:56.250 ************************************ 00:18:56.250 13:31:44 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:56.250 13:31:44 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:56.250 13:31:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:56.250 13:31:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.250 13:31:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:56.250 ************************************ 00:18:56.250 START TEST bdev_verify 00:18:56.250 ************************************ 00:18:56.250 13:31:44 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:56.510 [2024-11-26 13:31:44.913648] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:18:56.510 [2024-11-26 13:31:44.913834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90170 ] 00:18:56.768 [2024-11-26 13:31:45.095400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:56.768 [2024-11-26 13:31:45.205396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.768 [2024-11-26 13:31:45.205407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.335 Running I/O for 5 seconds... 00:18:59.209 12644.00 IOPS, 49.39 MiB/s [2024-11-26T13:31:49.156Z] 12763.00 IOPS, 49.86 MiB/s [2024-11-26T13:31:50.106Z] 12727.33 IOPS, 49.72 MiB/s [2024-11-26T13:31:51.042Z] 12792.25 IOPS, 49.97 MiB/s [2024-11-26T13:31:51.042Z] 12682.20 IOPS, 49.54 MiB/s 00:19:02.472 Latency(us) 00:19:02.472 [2024-11-26T13:31:51.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.472 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:02.472 Verification LBA range: start 0x0 length 0x2000 00:19:02.472 raid5f : 5.01 6316.43 24.67 0.00 0.00 30260.25 385.40 27763.43 00:19:02.472 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:02.472 Verification LBA range: start 0x2000 length 0x2000 00:19:02.472 raid5f : 5.02 6324.86 24.71 0.00 0.00 30752.55 90.30 26333.56 00:19:02.472 [2024-11-26T13:31:51.042Z] =================================================================================================================== 00:19:02.472 [2024-11-26T13:31:51.042Z] Total : 12641.29 49.38 0.00 0.00 30506.80 90.30 27763.43 00:19:03.410 00:19:03.410 real 0m7.062s 00:19:03.410 user 0m12.954s 00:19:03.410 sys 0m0.333s 00:19:03.411 13:31:51 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.411 13:31:51 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:03.411 ************************************ 00:19:03.411 END TEST bdev_verify 00:19:03.411 ************************************ 00:19:03.411 13:31:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:03.411 13:31:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:03.411 13:31:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.411 13:31:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.411 ************************************ 00:19:03.411 START TEST bdev_verify_big_io 00:19:03.411 ************************************ 00:19:03.411 13:31:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:03.670 [2024-11-26 13:31:51.997763] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:19:03.670 [2024-11-26 13:31:51.997890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90263 ] 00:19:03.670 [2024-11-26 13:31:52.162706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:03.930 [2024-11-26 13:31:52.280982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.930 [2024-11-26 13:31:52.280996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.498 Running I/O for 5 seconds... 
00:19:06.373 693.00 IOPS, 43.31 MiB/s [2024-11-26T13:31:56.323Z] 761.00 IOPS, 47.56 MiB/s [2024-11-26T13:31:57.277Z] 782.00 IOPS, 48.88 MiB/s [2024-11-26T13:31:58.214Z] 824.50 IOPS, 51.53 MiB/s [2024-11-26T13:31:58.214Z] 812.40 IOPS, 50.77 MiB/s 00:19:09.644 Latency(us) 00:19:09.644 [2024-11-26T13:31:58.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.644 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.644 Verification LBA range: start 0x0 length 0x200 00:19:09.644 raid5f : 5.19 416.21 26.01 0.00 0.00 7764291.44 178.73 320292.31 00:19:09.644 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.644 Verification LBA range: start 0x200 length 0x200 00:19:09.644 raid5f : 5.25 411.53 25.72 0.00 0.00 7676021.82 173.15 333637.82 00:19:09.644 [2024-11-26T13:31:58.214Z] =================================================================================================================== 00:19:09.644 [2024-11-26T13:31:58.214Z] Total : 827.74 51.73 0.00 0.00 7720156.63 173.15 333637.82 00:19:11.024 00:19:11.025 real 0m7.259s 00:19:11.025 user 0m13.394s 00:19:11.025 sys 0m0.326s 00:19:11.025 13:31:59 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.025 13:31:59 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.025 ************************************ 00:19:11.025 END TEST bdev_verify_big_io 00:19:11.025 ************************************ 00:19:11.025 13:31:59 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.025 13:31:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:11.025 13:31:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.025 13:31:59 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:11.025 ************************************ 00:19:11.025 START TEST bdev_write_zeroes 00:19:11.025 ************************************ 00:19:11.025 13:31:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.025 [2024-11-26 13:31:59.320296] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:19:11.025 [2024-11-26 13:31:59.320440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90356 ] 00:19:11.025 [2024-11-26 13:31:59.486042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.284 [2024-11-26 13:31:59.602510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.543 Running I/O for 1 seconds... 
00:19:12.921 23415.00 IOPS, 91.46 MiB/s 00:19:12.921 Latency(us) 00:19:12.921 [2024-11-26T13:32:01.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.921 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.921 raid5f : 1.01 23397.86 91.40 0.00 0.00 5451.17 1817.13 15966.95 00:19:12.921 [2024-11-26T13:32:01.491Z] =================================================================================================================== 00:19:12.921 [2024-11-26T13:32:01.491Z] Total : 23397.86 91.40 0.00 0.00 5451.17 1817.13 15966.95 00:19:13.859 00:19:13.859 real 0m2.902s 00:19:13.859 user 0m2.462s 00:19:13.859 sys 0m0.313s 00:19:13.859 13:32:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.859 13:32:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:13.859 ************************************ 00:19:13.859 END TEST bdev_write_zeroes 00:19:13.859 ************************************ 00:19:13.859 13:32:02 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:13.859 13:32:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:13.860 13:32:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.860 13:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:13.860 ************************************ 00:19:13.860 START TEST bdev_json_nonenclosed 00:19:13.860 ************************************ 00:19:13.860 13:32:02 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:13.860 [2024-11-26 
13:32:02.304440] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:19:13.860 [2024-11-26 13:32:02.304613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90408 ] 00:19:14.119 [2024-11-26 13:32:02.495399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.119 [2024-11-26 13:32:02.647156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.119 [2024-11-26 13:32:02.647319] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:14.119 [2024-11-26 13:32:02.647372] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:14.119 [2024-11-26 13:32:02.647391] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:14.379 00:19:14.379 real 0m0.675s 00:19:14.379 user 0m0.414s 00:19:14.379 sys 0m0.156s 00:19:14.379 13:32:02 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.379 13:32:02 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:14.379 ************************************ 00:19:14.379 END TEST bdev_json_nonenclosed 00:19:14.379 ************************************ 00:19:14.379 13:32:02 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.379 13:32:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:14.379 13:32:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.379 13:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:14.379 
************************************ 00:19:14.379 START TEST bdev_json_nonarray 00:19:14.379 ************************************ 00:19:14.379 13:32:02 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.639 [2024-11-26 13:32:03.033078] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:19:14.639 [2024-11-26 13:32:03.033291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90435 ] 00:19:14.898 [2024-11-26 13:32:03.223958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.898 [2024-11-26 13:32:03.379489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.898 [2024-11-26 13:32:03.379665] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:14.898 [2024-11-26 13:32:03.379694] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:14.898 [2024-11-26 13:32:03.379717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:15.157 00:19:15.157 real 0m0.667s 00:19:15.157 user 0m0.420s 00:19:15.157 sys 0m0.142s 00:19:15.157 13:32:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.157 ************************************ 00:19:15.157 END TEST bdev_json_nonarray 00:19:15.157 ************************************ 00:19:15.157 13:32:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:15.157 13:32:03 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:15.157 00:19:15.157 real 0m45.505s 00:19:15.157 user 1m2.103s 00:19:15.157 sys 0m5.020s 00:19:15.157 13:32:03 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.157 13:32:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:15.157 
************************************ 00:19:15.157 END TEST blockdev_raid5f 00:19:15.157 ************************************ 00:19:15.157 13:32:03 -- spdk/autotest.sh@194 -- # uname -s 00:19:15.157 13:32:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:15.157 13:32:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:15.157 13:32:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:15.157 13:32:03 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:15.157 13:32:03 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:15.157 13:32:03 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:15.157 13:32:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.157 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:19:15.417 13:32:03 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:15.417 13:32:03 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:15.417 13:32:03 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:15.417 13:32:03 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:15.417 13:32:03 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:15.417 13:32:03 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:15.417 13:32:03 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:15.417 13:32:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.417 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:19:15.417 13:32:03 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:15.417 13:32:03 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:15.417 13:32:03 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:15.417 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:19:17.322 INFO: APP EXITING 00:19:17.322 INFO: killing all VMs 00:19:17.322 INFO: killing vhost app 00:19:17.322 INFO: EXIT DONE 00:19:17.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:17.322 Waiting for block devices as requested 00:19:17.322 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:17.580 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:18.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:18.408 Cleaning 00:19:18.408 Removing: /var/run/dpdk/spdk0/config 00:19:18.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:18.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:18.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:18.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:18.408 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:18.408 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:18.408 Removing: /dev/shm/spdk_tgt_trace.pid56593 00:19:18.408 Removing: /var/run/dpdk/spdk0 00:19:18.408 Removing: /var/run/dpdk/spdk_pid56369 00:19:18.408 Removing: /var/run/dpdk/spdk_pid56593 00:19:18.408 Removing: /var/run/dpdk/spdk_pid56811 00:19:18.408 Removing: /var/run/dpdk/spdk_pid56915 00:19:18.408 Removing: /var/run/dpdk/spdk_pid56960 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57088 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57106 
00:19:18.408 Removing: /var/run/dpdk/spdk_pid57311 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57416 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57512 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57623 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57731 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57765 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57807 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57878 00:19:18.408 Removing: /var/run/dpdk/spdk_pid57978 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58436 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58506 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58574 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58590 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58727 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58749 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58886 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58902 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58966 00:19:18.408 Removing: /var/run/dpdk/spdk_pid58986 00:19:18.408 Removing: /var/run/dpdk/spdk_pid59051 00:19:18.408 Removing: /var/run/dpdk/spdk_pid59069 00:19:18.408 Removing: /var/run/dpdk/spdk_pid59261 00:19:18.408 Removing: /var/run/dpdk/spdk_pid59297 00:19:18.408 Removing: /var/run/dpdk/spdk_pid59386 00:19:18.408 Removing: /var/run/dpdk/spdk_pid60719 00:19:18.408 Removing: /var/run/dpdk/spdk_pid60931 00:19:18.408 Removing: /var/run/dpdk/spdk_pid61071 00:19:18.408 Removing: /var/run/dpdk/spdk_pid61715 00:19:18.408 Removing: /var/run/dpdk/spdk_pid61931 00:19:18.408 Removing: /var/run/dpdk/spdk_pid62071 00:19:18.408 Removing: /var/run/dpdk/spdk_pid62720 00:19:18.408 Removing: /var/run/dpdk/spdk_pid63049 00:19:18.408 Removing: /var/run/dpdk/spdk_pid63189 00:19:18.408 Removing: /var/run/dpdk/spdk_pid64585 00:19:18.408 Removing: /var/run/dpdk/spdk_pid64834 00:19:18.408 Removing: /var/run/dpdk/spdk_pid64978 00:19:18.408 Removing: /var/run/dpdk/spdk_pid66381 00:19:18.408 Removing: /var/run/dpdk/spdk_pid66634 00:19:18.408 Removing: /var/run/dpdk/spdk_pid66774 
00:19:18.408 Removing: /var/run/dpdk/spdk_pid68177 00:19:18.408 Removing: /var/run/dpdk/spdk_pid68628 00:19:18.408 Removing: /var/run/dpdk/spdk_pid68768 00:19:18.408 Removing: /var/run/dpdk/spdk_pid70271 00:19:18.408 Removing: /var/run/dpdk/spdk_pid70537 00:19:18.408 Removing: /var/run/dpdk/spdk_pid70677 00:19:18.408 Removing: /var/run/dpdk/spdk_pid72179 00:19:18.408 Removing: /var/run/dpdk/spdk_pid72440 00:19:18.408 Removing: /var/run/dpdk/spdk_pid72587 00:19:18.408 Removing: /var/run/dpdk/spdk_pid74086 00:19:18.408 Removing: /var/run/dpdk/spdk_pid74584 00:19:18.408 Removing: /var/run/dpdk/spdk_pid74724 00:19:18.408 Removing: /var/run/dpdk/spdk_pid74868 00:19:18.408 Removing: /var/run/dpdk/spdk_pid75301 00:19:18.408 Removing: /var/run/dpdk/spdk_pid76055 00:19:18.408 Removing: /var/run/dpdk/spdk_pid76432 00:19:18.408 Removing: /var/run/dpdk/spdk_pid77127 00:19:18.408 Removing: /var/run/dpdk/spdk_pid77585 00:19:18.408 Removing: /var/run/dpdk/spdk_pid78362 00:19:18.408 Removing: /var/run/dpdk/spdk_pid78771 00:19:18.408 Removing: /var/run/dpdk/spdk_pid80763 00:19:18.408 Removing: /var/run/dpdk/spdk_pid81207 00:19:18.408 Removing: /var/run/dpdk/spdk_pid81647 00:19:18.666 Removing: /var/run/dpdk/spdk_pid83760 00:19:18.666 Removing: /var/run/dpdk/spdk_pid84253 00:19:18.666 Removing: /var/run/dpdk/spdk_pid84765 00:19:18.666 Removing: /var/run/dpdk/spdk_pid85829 00:19:18.666 Removing: /var/run/dpdk/spdk_pid86152 00:19:18.666 Removing: /var/run/dpdk/spdk_pid87102 00:19:18.666 Removing: /var/run/dpdk/spdk_pid87431 00:19:18.666 Removing: /var/run/dpdk/spdk_pid88375 00:19:18.666 Removing: /var/run/dpdk/spdk_pid88704 00:19:18.666 Removing: /var/run/dpdk/spdk_pid89386 00:19:18.666 Removing: /var/run/dpdk/spdk_pid89655 00:19:18.666 Removing: /var/run/dpdk/spdk_pid89710 00:19:18.666 Removing: /var/run/dpdk/spdk_pid89748 00:19:18.666 Removing: /var/run/dpdk/spdk_pid89997 00:19:18.666 Removing: /var/run/dpdk/spdk_pid90170 00:19:18.666 Removing: /var/run/dpdk/spdk_pid90263 
00:19:18.666 Removing: /var/run/dpdk/spdk_pid90356 00:19:18.666 Removing: /var/run/dpdk/spdk_pid90408 00:19:18.666 Removing: /var/run/dpdk/spdk_pid90435 00:19:18.666 Clean 00:19:18.666 13:32:07 -- common/autotest_common.sh@1453 -- # return 0 00:19:18.666 13:32:07 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:18.666 13:32:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.666 13:32:07 -- common/autotest_common.sh@10 -- # set +x 00:19:18.666 13:32:07 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:18.666 13:32:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.666 13:32:07 -- common/autotest_common.sh@10 -- # set +x 00:19:18.666 13:32:07 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:18.666 13:32:07 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:18.667 13:32:07 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:18.667 13:32:07 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:18.667 13:32:07 -- spdk/autotest.sh@398 -- # hostname 00:19:18.667 13:32:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:18.926 geninfo: WARNING: invalid characters removed from testname! 
00:19:40.859 13:32:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:42.767 13:32:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:44.782 13:32:33 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:46.709 13:32:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:49.245 13:32:37 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:51.776 13:32:39 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:54.312 13:32:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:54.312 13:32:42 -- spdk/autorun.sh@1 -- $ timing_finish 00:19:54.312 13:32:42 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:19:54.312 13:32:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:54.312 13:32:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:54.312 13:32:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:54.312 + [[ -n 5208 ]] 00:19:54.312 + sudo kill 5208 00:19:54.322 [Pipeline] } 00:19:54.338 [Pipeline] // timeout 00:19:54.343 [Pipeline] } 00:19:54.358 [Pipeline] // stage 00:19:54.364 [Pipeline] } 00:19:54.378 [Pipeline] // catchError 00:19:54.388 [Pipeline] stage 00:19:54.391 [Pipeline] { (Stop VM) 00:19:54.403 [Pipeline] sh 00:19:54.683 + vagrant halt 00:19:57.217 ==> default: Halting domain... 00:20:03.799 [Pipeline] sh 00:20:04.080 + vagrant destroy -f 00:20:06.616 ==> default: Removing domain... 
00:20:06.885 [Pipeline] sh 00:20:07.166 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:07.175 [Pipeline] } 00:20:07.189 [Pipeline] // stage 00:20:07.194 [Pipeline] } 00:20:07.208 [Pipeline] // dir 00:20:07.213 [Pipeline] } 00:20:07.227 [Pipeline] // wrap 00:20:07.234 [Pipeline] } 00:20:07.246 [Pipeline] // catchError 00:20:07.255 [Pipeline] stage 00:20:07.257 [Pipeline] { (Epilogue) 00:20:07.270 [Pipeline] sh 00:20:07.552 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:12.844 [Pipeline] catchError 00:20:12.846 [Pipeline] { 00:20:12.859 [Pipeline] sh 00:20:13.141 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:13.141 Artifacts sizes are good 00:20:13.150 [Pipeline] } 00:20:13.164 [Pipeline] // catchError 00:20:13.185 [Pipeline] archiveArtifacts 00:20:13.206 Archiving artifacts 00:20:13.307 [Pipeline] cleanWs 00:20:13.319 [WS-CLEANUP] Deleting project workspace... 00:20:13.320 [WS-CLEANUP] Deferred wipeout is used... 00:20:13.326 [WS-CLEANUP] done 00:20:13.328 [Pipeline] } 00:20:13.343 [Pipeline] // stage 00:20:13.348 [Pipeline] } 00:20:13.360 [Pipeline] // node 00:20:13.364 [Pipeline] End of Pipeline 00:20:13.404 Finished: SUCCESS